Mar 20 15:38:59 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 20 15:38:59 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 20 15:38:59 crc restorecon[4687]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 
15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:38:59 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 
15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 15:39:00 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 15:39:00 crc restorecon[4687]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 20 15:39:00 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 15:39:01 crc kubenswrapper[4730]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.225436 4730 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232730 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232755 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232763 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232770 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232777 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232784 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232789 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232796 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232804 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232811 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232815 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232820 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232825 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232832 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232837 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232841 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232846 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232853 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232859 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232864 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232870 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232875 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232880 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232884 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232890 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232895 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232900 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232904 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232909 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232914 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232919 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232924 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232929 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232934 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232938 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232943 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232948 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232952 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232957 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232965 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232972 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232979 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232984 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232989 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232995 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.232999 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233004 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233009 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233013 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233020 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233025 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233030 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233035 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233040 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233044 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233049 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233054 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233059 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233064 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233068 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233073 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233077 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233082 4730 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233087 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233091 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233096 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233100 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233107 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233111 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233116 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.233120 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236111 4730 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236136 4730 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236167 4730 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236184 4730 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236191 4730 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236197 4730 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236206 4730 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236213 4730 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236219 4730 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236225 4730 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236232 4730 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236238 4730 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236257 4730 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236262 4730 flags.go:64] FLAG: --cgroup-root=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236267 4730 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236274 4730 flags.go:64] FLAG: --client-ca-file=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236280 4730 flags.go:64] FLAG: --cloud-config=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236286 4730 flags.go:64] FLAG: --cloud-provider=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236291 4730 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236299 4730 flags.go:64] FLAG: --cluster-domain=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236304 4730 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236310 4730 flags.go:64] FLAG: --config-dir=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236317 4730 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236323 4730 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236329 4730 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236334 4730 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236338 4730 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236343 4730 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236348 4730 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236352 4730 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236357 4730 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236363 4730 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236367 4730 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236373 4730 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236377 4730 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236381 4730 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236386 4730 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236392 4730 flags.go:64] FLAG: --enable-server="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236397 4730 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236404 4730 flags.go:64] FLAG: --event-burst="100"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236409 4730 flags.go:64] FLAG: --event-qps="50"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236414 4730 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236419 4730 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236425 4730 flags.go:64] FLAG: --eviction-hard=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236432 4730 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236437 4730 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236442 4730 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236446 4730 flags.go:64] FLAG: --eviction-soft=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236451 4730 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236455 4730 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236459 4730 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236463 4730 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236468 4730 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236472 4730 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236476 4730 flags.go:64] FLAG: --feature-gates=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236481 4730 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236486 4730 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236490 4730 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236494 4730 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236499 4730 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236503 4730 flags.go:64] FLAG: --help="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236507 4730 flags.go:64] FLAG: --hostname-override=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236512 4730 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236517 4730 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236521 4730 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236525 4730 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236529 4730 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236533 4730 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236537 4730 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236542 4730 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236546 4730 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236550 4730 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236554 4730 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236565 4730 flags.go:64] FLAG: --kube-reserved=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236569 4730 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236573 4730 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236577 4730 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236581 4730 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236585 4730 flags.go:64] FLAG: --lock-file=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236589 4730 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236594 4730 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236597 4730 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236605 4730 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236609 4730 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236614 4730 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236618 4730 flags.go:64] FLAG: --logging-format="text"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236622 4730 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236627 4730 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236631 4730 flags.go:64] FLAG: --manifest-url=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236635 4730 flags.go:64] FLAG: --manifest-url-header=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236641 4730 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236646 4730 flags.go:64] FLAG: --max-open-files="1000000"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236651 4730 flags.go:64] FLAG: --max-pods="110"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236656 4730 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236664 4730 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236668 4730 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236673 4730 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236677 4730 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236682 4730 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236686 4730 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236698 4730 flags.go:64] FLAG: --node-status-max-images="50"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236702 4730 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236706 4730 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236710 4730 flags.go:64] FLAG: --pod-cidr=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236714 4730 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236726 4730 flags.go:64] FLAG: --pod-manifest-path=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236730 4730 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236735 4730 flags.go:64] FLAG: --pods-per-core="0"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236739 4730 flags.go:64] FLAG: --port="10250"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236744 4730 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236750 4730 flags.go:64] FLAG: --provider-id=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236754 4730 flags.go:64] FLAG: --qos-reserved=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236758 4730 flags.go:64] FLAG: --read-only-port="10255"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236763 4730 flags.go:64] FLAG: --register-node="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236768 4730 flags.go:64] FLAG: --register-schedulable="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236773 4730 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236780 4730 flags.go:64] FLAG: --registry-burst="10"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236784 4730 flags.go:64] FLAG: --registry-qps="5"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236788 4730 flags.go:64] FLAG: --reserved-cpus=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236792 4730 flags.go:64] FLAG: --reserved-memory=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236798 4730 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236802 4730 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236806 4730 flags.go:64] FLAG: --rotate-certificates="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236811 4730 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236815 4730 flags.go:64] FLAG: --runonce="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236819 4730 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236824 4730 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236829 4730 flags.go:64] FLAG: --seccomp-default="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236833 4730 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236837 4730 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236841 4730 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236846 4730 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236850 4730 flags.go:64] FLAG: --storage-driver-password="root"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236854 4730 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236858 4730 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236862 4730 flags.go:64] FLAG: --storage-driver-user="root"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236866 4730 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236870 4730 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236874 4730 flags.go:64] FLAG: --system-cgroups=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236878 4730 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236885 4730 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236890 4730 flags.go:64] FLAG: --tls-cert-file=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236894 4730 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236899 4730 flags.go:64] FLAG: --tls-min-version=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236902 4730 flags.go:64] FLAG: --tls-private-key-file=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236911 4730 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236915 4730 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236919 4730 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236923 4730 flags.go:64] FLAG: --v="2"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236929 4730 flags.go:64] FLAG: --version="false"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236936 4730 flags.go:64] FLAG: --vmodule=""
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236942 4730 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.236949 4730 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237091 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237097 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237101 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237105 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237109 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237112 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237116 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237121 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237125 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237128 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237132 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237136 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237140 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237144 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237148 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237152 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237156 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237160 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237165 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237170 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237175 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237179 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237184 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237187 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237194 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237198 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237202 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237206 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237211 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237215 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237219 4730 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237223 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237227 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237231 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237236 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237241 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237268 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237272 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237276 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237280 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237283 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237287 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237291 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237294 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237298 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237301 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237305 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237309 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237312 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237316 4730 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237319 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237323 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237327 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237335 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237339 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237342 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237348 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237352 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237356 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237359 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237363 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237367 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237371 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237374 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237380 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237384 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237388 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237392 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237396 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237400 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.237404 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.237410 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.248996 4730 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.249046 4730 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249120 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249129 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249134 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249138 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249143 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249147 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249150 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249154 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249159 4730 feature_gate.go:353] Setting GA 
feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249167 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249171 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249175 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249180 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249184 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249187 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249190 4730 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249194 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249198 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249201 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249205 4730 feature_gate.go:330] unrecognized feature gate: Example Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249208 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249212 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249215 4730 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249219 4730 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249222 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249226 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249230 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249234 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249240 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249270 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249275 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249281 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249286 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249294 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249300 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249305 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249310 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 15:39:01 crc 
kubenswrapper[4730]: W0320 15:39:01.249317 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249323 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249328 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249333 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249337 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249340 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249344 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249348 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249352 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249355 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249359 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249364 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249369 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249373 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249377 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249381 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249385 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249389 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249392 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249396 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249399 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249403 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249406 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249410 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249413 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249417 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 15:39:01 crc kubenswrapper[4730]: 
W0320 15:39:01.249420 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249424 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249428 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249432 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249435 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249438 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249442 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249448 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.249455 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249605 4730 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249613 4730 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249617 4730 feature_gate.go:330] unrecognized feature gate: ExternalOIDC 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249621 4730 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249625 4730 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249629 4730 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249632 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249637 4730 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249640 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249644 4730 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249648 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249651 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249655 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249659 4730 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249662 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249666 4730 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249669 4730 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 
15:39:01.249673 4730 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249676 4730 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249680 4730 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249683 4730 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249686 4730 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249691 4730 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249696 4730 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249701 4730 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249705 4730 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249708 4730 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249712 4730 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249715 4730 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249719 4730 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249722 4730 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249726 4730 
feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249729 4730 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249733 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249736 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249740 4730 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249743 4730 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249747 4730 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249750 4730 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249754 4730 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249757 4730 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249761 4730 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249764 4730 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249768 4730 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249772 4730 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249776 4730 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 
15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249780 4730 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249784 4730 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249789 4730 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249793 4730 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249796 4730 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249800 4730 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249805 4730 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249809 4730 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249814 4730 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249818 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249822 4730 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249826 4730 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249830 4730 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249834 4730 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249838 4730 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249841 4730 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249845 4730 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249849 4730 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249853 4730 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249856 4730 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249860 4730 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249863 4730 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249867 4730 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249871 4730 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.249875 4730 feature_gate.go:330] unrecognized feature gate: Example Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.249880 4730 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.251142 4730 server.go:940] "Client rotation is on, will bootstrap in background" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.255017 4730 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.258048 4730 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.258140 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.260407 4730 server.go:997] "Starting client certificate rotation" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.260440 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.260629 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.296125 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.300865 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.302170 4730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.325005 4730 log.go:25] "Validated CRI v1 runtime API" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.367596 4730 log.go:25] "Validated CRI v1 image API" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.370327 4730 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.381992 4730 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-20-15-33-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.382083 4730 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.423322 4730 manager.go:217] Machine: {Timestamp:2026-03-20 15:39:01.41875618 +0000 UTC m=+0.632127629 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dfe7d645-fe91-432e-8360-ef4633bfea29 BootID:666d62d4-aa52-41cc-be79-8c9a068e7752 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:78:58:e2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:78:58:e2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0e:90:fd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c7:54:5a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:46:f0:23 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2a:00:b3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:6f:3d:8a:c5:84 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:13:98:05:cc:72 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.423716 4730 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.423992 4730 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.426762 4730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.427051 4730 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.427100 4730 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.427452 4730 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.427469 4730 container_manager_linux.go:303] "Creating device plugin manager" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.428174 4730 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.428208 4730 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.428529 4730 state_mem.go:36] "Initialized new in-memory state store" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.428652 4730 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.433818 4730 kubelet.go:418] "Attempting to sync node with API server" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.433867 4730 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.433890 4730 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.433911 4730 kubelet.go:324] "Adding apiserver pod source" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.433929 4730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 
15:39:01.441724 4730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.448832 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.450755 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.451912 4730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.449174 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.452179 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.455610 4730 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458336 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458534 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458621 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458674 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458730 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458788 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458861 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458928 4730 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.458984 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.459046 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.459106 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.459155 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.462725 4730 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.463820 4730 server.go:1280] "Started kubelet" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.464445 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.465134 4730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.465280 4730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 15:39:01 crc systemd[1]: Started Kubernetes Kubelet. 
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.465927 4730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.468344 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.468418 4730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.469314 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.469469 4730 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.469514 4730 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.469590 4730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.470344 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.470814 4730 server.go:460] "Adding debug handlers to kubelet server" Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.473172 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.473282 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.470241 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.478395 4730 factory.go:55] Registering systemd factory Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.478809 4730 factory.go:221] Registration of the systemd container factory successfully Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.480030 4730 factory.go:153] Registering CRI-O factory Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.480077 4730 factory.go:221] Registration of the crio container factory successfully Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.480169 4730 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.480218 4730 factory.go:103] 
Registering Raw factory Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.480244 4730 manager.go:1196] Started watching for new ooms in manager Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.481641 4730 manager.go:319] Starting recovery of all containers Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492092 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492163 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492184 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492197 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492211 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492224 4730 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492237 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492271 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492290 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492303 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492317 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492334 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492347 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492362 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492374 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492386 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492403 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492415 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492431 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492454 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492477 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492495 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492514 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492567 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" 
seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492580 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492612 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492628 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492642 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492679 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492694 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 
15:39:01.492707 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492723 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492748 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492767 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492783 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492801 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492815 4730 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492831 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492846 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492865 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492891 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492908 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492929 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492946 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492964 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492980 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.492999 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493016 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493031 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493045 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493059 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493073 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493098 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493113 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493129 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493141 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493155 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493171 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493184 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493197 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493210 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493235 
4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493265 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493278 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493291 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493304 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493316 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493354 4730 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493365 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493382 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493397 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493409 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493421 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493437 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493450 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493464 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493484 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493507 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493526 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.493543 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" 
Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496008 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496041 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496053 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496071 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496083 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496094 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 
15:39:01.496105 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496118 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496139 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496154 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496166 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496182 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496193 4730 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496211 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496224 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496237 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496270 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496284 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496298 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496312 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496325 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496339 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496351 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496443 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496470 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496497 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496521 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496545 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496557 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496573 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496587 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496601 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496612 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496625 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496644 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496657 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.496670 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498803 4730 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498861 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498881 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498894 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498908 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498921 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498937 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498950 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498969 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498981 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.498994 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499008 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499019 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499033 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499045 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499057 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499067 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499078 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499092 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499103 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499115 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499133 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499146 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499161 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499174 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499186 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499239 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499269 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499280 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499292 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499306 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499323 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499333 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499345 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499357 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499369 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499385 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499396 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499411 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499426 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499439 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499455 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499482 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499494 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499540 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499554 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499589 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499601 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499613 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499627 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499639 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499651 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499662 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499695 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499781 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499797 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499808 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499821 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499832 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499844 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" 
seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499878 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499890 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499923 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499933 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499974 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.499988 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500001 4730 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500012 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500024 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500034 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500177 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500188 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500201 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500212 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500223 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500234 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500271 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500283 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500319 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500330 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500346 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500360 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500374 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500388 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500399 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" 
seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500415 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500429 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500441 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500454 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500467 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500479 4730 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 20 15:39:01 crc kubenswrapper[4730]: 
I0320 15:39:01.500490 4730 reconstruct.go:97] "Volume reconstruction finished" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.500500 4730 reconciler.go:26] "Reconciler: start to sync state" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.508794 4730 manager.go:324] Recovery completed Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.521830 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.523859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.524098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.524111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.526517 4730 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.526549 4730 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.526576 4730 state_mem.go:36] "Initialized new in-memory state store" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.529073 4730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.531749 4730 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.531793 4730 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.531831 4730 kubelet.go:2335] "Starting kubelet main sync loop" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.531989 4730 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 15:39:01 crc kubenswrapper[4730]: W0320 15:39:01.533974 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.534062 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.545233 4730 policy_none.go:49] "None policy: Start" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.546616 4730 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.546654 4730 state_mem.go:35] "Initializing new in-memory state store" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.569871 4730 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.599416 4730 manager.go:334] "Starting Device Plugin manager" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.599759 4730 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.600008 4730 server.go:79] "Starting device plugin registration server" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.601327 4730 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.601376 4730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.601847 4730 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.601994 4730 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.602011 4730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.608984 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.632366 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.632564 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.635111 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.635162 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.635173 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.635379 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.636060 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.636127 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.636868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.636923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.636940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637152 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637296 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.637329 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.640697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.640734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.640747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.641536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.641660 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.641746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.642673 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643030 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643090 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643753 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643902 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643967 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.643935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644387 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644528 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644557 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.644689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.645155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.645182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.645192 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.671196 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.701551 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.702857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc 
kubenswrapper[4730]: I0320 15:39:01.702907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.702922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.702958 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703358 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703404 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703434 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703464 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703497 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703603 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.703588 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703717 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703738 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703757 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703777 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703812 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.703848 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805474 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805620 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805656 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc 
kubenswrapper[4730]: I0320 15:39:01.805666 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805754 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805782 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805760 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805813 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805858 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805888 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.805868 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806075 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806107 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806205 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806212 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806231 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806235 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806272 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806349 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806352 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806308 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806327 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.806505 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.904470 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.905872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.905923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.905935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.905960 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:01 crc kubenswrapper[4730]: E0320 15:39:01.906463 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.968583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.989209 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 15:39:01 crc kubenswrapper[4730]: I0320 15:39:01.993909 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.013886 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.018196 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.024528 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-172e91192de34734b9c2e8902af3d4b109d0533e261f0b7ed31be05be88e4e78 WatchSource:0}: Error finding container 172e91192de34734b9c2e8902af3d4b109d0533e261f0b7ed31be05be88e4e78: Status 404 returned error can't find the container with id 172e91192de34734b9c2e8902af3d4b109d0533e261f0b7ed31be05be88e4e78 Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.034655 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bd426064ccccf15a8df4852a4b40519cf64976fc7ebe6bc291d251d0a7197c4d WatchSource:0}: Error finding container bd426064ccccf15a8df4852a4b40519cf64976fc7ebe6bc291d251d0a7197c4d: Status 404 returned error can't find the container with id bd426064ccccf15a8df4852a4b40519cf64976fc7ebe6bc291d251d0a7197c4d Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.042031 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3ed0cd305128c289c3df06620362422df1b61abcaa253a3b7f2c1127abf63e06 WatchSource:0}: Error finding container 3ed0cd305128c289c3df06620362422df1b61abcaa253a3b7f2c1127abf63e06: Status 404 returned error can't find the container with id 3ed0cd305128c289c3df06620362422df1b61abcaa253a3b7f2c1127abf63e06 Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.045585 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-77c3d7fd2c641d481cfce2566ad31514c958b204f1bd837dc5ec076235f799f6 WatchSource:0}: Error finding container 77c3d7fd2c641d481cfce2566ad31514c958b204f1bd837dc5ec076235f799f6: Status 404 returned error can't find the container with id 77c3d7fd2c641d481cfce2566ad31514c958b204f1bd837dc5ec076235f799f6 Mar 20 15:39:02 crc kubenswrapper[4730]: E0320 15:39:02.072780 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.306611 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.308864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.308927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.308952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.309016 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:02 crc kubenswrapper[4730]: E0320 15:39:02.309576 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.465786 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.508498 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:02 crc kubenswrapper[4730]: E0320 15:39:02.508572 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.526361 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:02 crc kubenswrapper[4730]: E0320 15:39:02.526401 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.537764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd426064ccccf15a8df4852a4b40519cf64976fc7ebe6bc291d251d0a7197c4d"} Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.538858 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"172e91192de34734b9c2e8902af3d4b109d0533e261f0b7ed31be05be88e4e78"} Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.539856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77c3d7fd2c641d481cfce2566ad31514c958b204f1bd837dc5ec076235f799f6"} Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.540653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ed0cd305128c289c3df06620362422df1b61abcaa253a3b7f2c1127abf63e06"} Mar 20 15:39:02 crc kubenswrapper[4730]: I0320 15:39:02.541377 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fd13fe13babdf950576c8680a07e2b41551f544e08704402c8c31a760d4d230c"} Mar 20 15:39:02 crc kubenswrapper[4730]: W0320 15:39:02.575386 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:02 crc kubenswrapper[4730]: E0320 15:39:02.575500 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:02 crc 
kubenswrapper[4730]: E0320 15:39:02.873830 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Mar 20 15:39:03 crc kubenswrapper[4730]: W0320 15:39:03.070515 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:03 crc kubenswrapper[4730]: E0320 15:39:03.070617 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.110001 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.111178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.111222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.111236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.111284 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:03 crc kubenswrapper[4730]: E0320 15:39:03.111822 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.465160 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.469242 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 15:39:03 crc kubenswrapper[4730]: E0320 15:39:03.470065 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.548691 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.548681 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.548822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.548846 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.548857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.549873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.549957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.549985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.550977 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e" exitCode=0 Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.551093 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.551211 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.552422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc 
kubenswrapper[4730]: I0320 15:39:03.552454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.552464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.553225 4730 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f" exitCode=0 Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.553316 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.553317 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.553972 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554752 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.554766 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.556204 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2" exitCode=0 Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.556229 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.556299 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.557132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.557195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.557223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.558542 4730 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e" exitCode=0 Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.558577 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e"} Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.558651 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.560224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.560259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:03 crc kubenswrapper[4730]: I0320 15:39:03.560268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc kubenswrapper[4730]: W0320 15:39:04.394077 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:04 crc kubenswrapper[4730]: E0320 15:39:04.394191 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.424788 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.466446 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:04 crc kubenswrapper[4730]: E0320 15:39:04.475437 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.565325 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.565371 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.565385 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.565395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.568161 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.568146 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.568832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.568863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.568873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.570023 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c" exitCode=0 Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.570097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.570113 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.571113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.571152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.571166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc 
kubenswrapper[4730]: I0320 15:39:04.573550 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.573642 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.573533 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.573747 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.573769 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e"} Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.574524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.574555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.574568 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.574857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:04 crc 
kubenswrapper[4730]: I0320 15:39:04.574891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.574900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.712135 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.713516 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.713576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.713592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:04 crc kubenswrapper[4730]: I0320 15:39:04.713629 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:04 crc kubenswrapper[4730]: E0320 15:39:04.714421 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Mar 20 15:39:04 crc kubenswrapper[4730]: W0320 15:39:04.920397 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Mar 20 15:39:04 crc kubenswrapper[4730]: E0320 15:39:04.920483 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Mar 20 15:39:05 crc kubenswrapper[4730]: E0320 15:39:05.295409 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.591126 4730 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506" exitCode=0 Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.591205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506"} Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.591318 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.592889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.592929 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.592940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.595704 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.595746 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.596237 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.596587 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de82f104aff72ff62f6c5387f4f4d337127c7abc347bf4bf1df18031dc2cf509"} Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.596693 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.597042 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.597841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.597869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.597896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598358 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.598998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.599057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:05 crc kubenswrapper[4730]: I0320 15:39:05.599079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.019443 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602410 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602533 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3"} Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602594 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619"} Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a"} Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602673 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.602880 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.603785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.603815 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.603825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.603903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 15:39:06.603968 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:06 crc kubenswrapper[4730]: I0320 
15:39:06.603986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.290518 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.290684 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.291999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.292034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.292047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.425358 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.425457 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.481496 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 15:39:07 crc 
kubenswrapper[4730]: I0320 15:39:07.610814 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.611568 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612041 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684"} Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612100 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa"} Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612651 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.612724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.915071 
4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.921211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.921274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.921284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:07 crc kubenswrapper[4730]: I0320 15:39:07.921309 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:08 crc kubenswrapper[4730]: I0320 15:39:08.613907 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:08 crc kubenswrapper[4730]: I0320 15:39:08.614663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:08 crc kubenswrapper[4730]: I0320 15:39:08.614697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:08 crc kubenswrapper[4730]: I0320 15:39:08.614711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:09 crc kubenswrapper[4730]: I0320 15:39:09.241891 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:09 crc kubenswrapper[4730]: I0320 15:39:09.242116 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:09 crc kubenswrapper[4730]: I0320 15:39:09.243203 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:09 crc kubenswrapper[4730]: 
I0320 15:39:09.243293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:09 crc kubenswrapper[4730]: I0320 15:39:09.243324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.343954 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.344322 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.345732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.345776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.345788 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.380777 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.380984 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.382416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.382456 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.382465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.652183 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.652409 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.653480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.653716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:10 crc kubenswrapper[4730]: I0320 15:39:10.653750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:11 crc kubenswrapper[4730]: E0320 15:39:11.609084 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.119094 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.119313 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.120558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.120602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.120612 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 
15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.123331 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.252825 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.253011 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.254044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.254072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.254081 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.623959 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.624987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.625020 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.625033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:12 crc kubenswrapper[4730]: I0320 15:39:12.628606 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:13 crc kubenswrapper[4730]: I0320 15:39:13.625699 4730 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:13 crc kubenswrapper[4730]: I0320 15:39:13.626453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:13 crc kubenswrapper[4730]: I0320 15:39:13.626490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:13 crc kubenswrapper[4730]: I0320 15:39:13.626502 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:15 crc kubenswrapper[4730]: I0320 15:39:15.466413 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 20 15:39:15 crc kubenswrapper[4730]: W0320 15:39:15.626639 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 20 15:39:15 crc kubenswrapper[4730]: I0320 15:39:15.626751 4730 trace.go:236] Trace[2045556304]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Mar-2026 15:39:05.624) (total time: 10002ms): Mar 20 15:39:15 crc kubenswrapper[4730]: Trace[2045556304]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:39:15.626) Mar 20 15:39:15 crc kubenswrapper[4730]: Trace[2045556304]: [10.002049706s] [10.002049706s] END Mar 20 15:39:15 crc kubenswrapper[4730]: E0320 15:39:15.626777 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 20 15:39:16 crc kubenswrapper[4730]: W0320 15:39:16.187828 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.187973 4730 trace.go:236] Trace[2123123142]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Mar-2026 15:39:06.186) (total time: 10001ms): Mar 20 15:39:16 crc kubenswrapper[4730]: Trace[2123123142]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:39:16.187) Mar 20 15:39:16 crc kubenswrapper[4730]: Trace[2123123142]: [10.001919921s] [10.001919921s] END Mar 20 15:39:16 crc kubenswrapper[4730]: E0320 15:39:16.188032 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.633802 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.637164 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="de82f104aff72ff62f6c5387f4f4d337127c7abc347bf4bf1df18031dc2cf509" exitCode=255 Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.637241 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"de82f104aff72ff62f6c5387f4f4d337127c7abc347bf4bf1df18031dc2cf509"}
Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.637593 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.639463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.639513 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.639529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:16 crc kubenswrapper[4730]: I0320 15:39:16.640458 4730 scope.go:117] "RemoveContainer" containerID="de82f104aff72ff62f6c5387f4f4d337127c7abc347bf4bf1df18031dc2cf509"
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.008822 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:17 crc kubenswrapper[4730]: W0320 15:39:17.010345 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.010409 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.010749 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.010957 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.012969 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.013155 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.013944 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.013989 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 15:39:17 crc kubenswrapper[4730]: W0320 15:39:17.016406 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:17 crc kubenswrapper[4730]: E0320 15:39:17.016470 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.018008 4730 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.018073 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.426897 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.427033 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.469898 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:17Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.641744 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.643842 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"}
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.643986 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.644994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.645025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:17 crc kubenswrapper[4730]: I0320 15:39:17.645038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.469295 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:18Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.648477 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.648897 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.650472 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149" exitCode=255
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.650539 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"}
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.650591 4730 scope.go:117] "RemoveContainer" containerID="de82f104aff72ff62f6c5387f4f4d337127c7abc347bf4bf1df18031dc2cf509"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.650859 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.652211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.652352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.652448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:18 crc kubenswrapper[4730]: I0320 15:39:18.653054 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"
Mar 20 15:39:18 crc kubenswrapper[4730]: E0320 15:39:18.653363 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:39:19 crc kubenswrapper[4730]: I0320 15:39:19.468422 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:19Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:19 crc kubenswrapper[4730]: I0320 15:39:19.658174 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 20 15:39:19 crc kubenswrapper[4730]: W0320 15:39:19.667306 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:19Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:19 crc kubenswrapper[4730]: E0320 15:39:19.667384 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:19Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.389851 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.390173 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.392127 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.392179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.392201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.394426 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.394796 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.397274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.397460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.397600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.398738 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"
Mar 20 15:39:20 crc kubenswrapper[4730]: E0320 15:39:20.399141 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.401961 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.414810 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.471687 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:20Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.666089 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.666101 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.667932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.668045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.668068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.668725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.668786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.668803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:20 crc kubenswrapper[4730]: I0320 15:39:20.669192 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"
Mar 20 15:39:20 crc kubenswrapper[4730]: E0320 15:39:20.669588 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:39:21 crc kubenswrapper[4730]: I0320 15:39:21.468278 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:21Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:21 crc kubenswrapper[4730]: E0320 15:39:21.609222 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 15:39:21 crc kubenswrapper[4730]: W0320 15:39:21.947945 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:21Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:21 crc kubenswrapper[4730]: E0320 15:39:21.948047 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.469480 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:22Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.865796 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.866179 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.867891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.867927 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.867939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:22 crc kubenswrapper[4730]: I0320 15:39:22.868712 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"
Mar 20 15:39:22 crc kubenswrapper[4730]: E0320 15:39:22.868951 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.411144 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.412460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.412566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.412590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.412634 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 15:39:23 crc kubenswrapper[4730]: E0320 15:39:23.415561 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:23Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 15:39:23 crc kubenswrapper[4730]: E0320 15:39:23.417664 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:23Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.468095 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:23Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.967280 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.967501 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.968934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.968998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.969014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:23 crc kubenswrapper[4730]: I0320 15:39:23.969775 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149"
Mar 20 15:39:23 crc kubenswrapper[4730]: E0320 15:39:23.970010 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:39:24 crc kubenswrapper[4730]: I0320 15:39:24.468193 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:24Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:25 crc kubenswrapper[4730]: I0320 15:39:25.386359 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 15:39:25 crc kubenswrapper[4730]: E0320 15:39:25.389333 4730 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:25 crc kubenswrapper[4730]: I0320 15:39:25.471152 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:25Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:26 crc kubenswrapper[4730]: W0320 15:39:26.217493 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:26Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:26 crc kubenswrapper[4730]: E0320 15:39:26.217592 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:26 crc kubenswrapper[4730]: W0320 15:39:26.264420 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:26Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:26 crc kubenswrapper[4730]: E0320 15:39:26.264511 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:26 crc kubenswrapper[4730]: I0320 15:39:26.468195 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:26Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:27 crc kubenswrapper[4730]: E0320 15:39:27.017791 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.425353 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.425544 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.425656 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.425903 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.427938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.428012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.428031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.432695 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.433773 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac" gracePeriod=30
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.467944 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:27 crc kubenswrapper[4730]: W0320 15:39:27.622772 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:27 crc kubenswrapper[4730]: E0320 15:39:27.622913 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.687810 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.688504 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac" exitCode=255
Mar 20 15:39:27 crc kubenswrapper[4730]: I0320 15:39:27.688595 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac"}
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.470860 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:28Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.694674 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.695305 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050"}
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.695523 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.696679 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.696714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:28 crc kubenswrapper[4730]: I0320 15:39:28.696727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:29 crc kubenswrapper[4730]: I0320 15:39:29.471499 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:29Z is after 2026-02-23T05:33:13Z
Mar 20 15:39:29 crc kubenswrapper[4730]: I0320 15:39:29.697819 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:29 crc kubenswrapper[4730]: I0320 15:39:29.699036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:29 crc kubenswrapper[4730]: I0320 15:39:29.699104 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:29 crc kubenswrapper[4730]: I0320 15:39:29.699124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.416372 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.418234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.418332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.418348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.418393 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 15:39:30 crc kubenswrapper[4730]: E0320 15:39:30.421222 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls:
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:30Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 15:39:30 crc kubenswrapper[4730]: E0320 15:39:30.421937 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:30Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.468544 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:30Z is after 2026-02-23T05:33:13Z Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.652902 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.700597 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.701589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.701797 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:30 crc kubenswrapper[4730]: I0320 15:39:30.701900 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:31 crc kubenswrapper[4730]: I0320 15:39:31.468687 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:31Z is after 2026-02-23T05:33:13Z Mar 20 15:39:31 crc kubenswrapper[4730]: E0320 15:39:31.609404 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:39:32 crc kubenswrapper[4730]: W0320 15:39:32.187820 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:32Z is after 2026-02-23T05:33:13Z Mar 20 15:39:32 crc kubenswrapper[4730]: E0320 15:39:32.187970 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:32Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 15:39:32 crc kubenswrapper[4730]: I0320 15:39:32.467920 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:32Z is after 2026-02-23T05:33:13Z Mar 20 15:39:33 crc kubenswrapper[4730]: I0320 15:39:33.469432 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:33Z is after 2026-02-23T05:33:13Z Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.425526 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.425679 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.426583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.426614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.426624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:34 crc kubenswrapper[4730]: I0320 15:39:34.468610 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:34Z is after 2026-02-23T05:33:13Z Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.468224 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:35Z is after 2026-02-23T05:33:13Z Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.533339 4730 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.534669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.534752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.534765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:35 crc kubenswrapper[4730]: I0320 15:39:35.535289 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.468713 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:36Z is after 2026-02-23T05:33:13Z Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.714963 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.715499 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.717550 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" exitCode=255 Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.717599 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694"} Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.717639 4730 scope.go:117] "RemoveContainer" containerID="23136ed264fc1e174d9978df43b71e3437a1217258a6e710b82d8c27d1478149" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.717776 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.718620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.718654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.718668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:36 crc kubenswrapper[4730]: I0320 15:39:36.719063 4730 scope.go:117] "RemoveContainer" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" Mar 20 15:39:36 crc kubenswrapper[4730]: E0320 15:39:36.719199 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:39:37 crc kubenswrapper[4730]: E0320 15:39:37.022868 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:37Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.422043 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.423241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.423304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.423352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.423379 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:37 crc kubenswrapper[4730]: E0320 15:39:37.424530 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:37Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 
15:39:37.425594 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.425664 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 15:39:37 crc kubenswrapper[4730]: E0320 15:39:37.426391 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:37Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.468177 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:37Z is after 2026-02-23T05:33:13Z Mar 20 15:39:37 crc kubenswrapper[4730]: I0320 15:39:37.723692 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 15:39:38 crc kubenswrapper[4730]: I0320 15:39:38.479889 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:39 crc kubenswrapper[4730]: I0320 15:39:39.473438 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:40 crc kubenswrapper[4730]: I0320 15:39:40.473407 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:41 crc kubenswrapper[4730]: I0320 15:39:41.471794 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:41 crc kubenswrapper[4730]: E0320 15:39:41.609938 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.471668 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.801370 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.819667 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.865149 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.865380 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.866457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.866633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.866713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:42 crc kubenswrapper[4730]: I0320 15:39:42.867341 4730 scope.go:117] "RemoveContainer" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" Mar 20 15:39:42 crc kubenswrapper[4730]: E0320 15:39:42.867572 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.469500 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.966706 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.966969 4730 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.968535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.968581 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.968597 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:43 crc kubenswrapper[4730]: I0320 15:39:43.969272 4730 scope.go:117] "RemoveContainer" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" Mar 20 15:39:43 crc kubenswrapper[4730]: E0320 15:39:43.969500 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.426685 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.428417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.428496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.428521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.428575 4730 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:44 crc kubenswrapper[4730]: E0320 15:39:44.432690 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 15:39:44 crc kubenswrapper[4730]: E0320 15:39:44.433132 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 15:39:44 crc kubenswrapper[4730]: I0320 15:39:44.466460 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:45 crc kubenswrapper[4730]: W0320 15:39:45.282879 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 20 15:39:45 crc kubenswrapper[4730]: E0320 15:39:45.283735 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 15:39:45 crc kubenswrapper[4730]: I0320 15:39:45.466402 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:46 crc kubenswrapper[4730]: I0320 15:39:46.469323 4730 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:47 crc kubenswrapper[4730]: W0320 15:39:47.014691 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.014737 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.028550 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d438fabf17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,LastTimestamp:2026-03-20 15:39:01.463744279 +0000 UTC m=+0.677115668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.032378 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.036689 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.041063 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.044728 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d4413de3b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.602362289 +0000 UTC m=+0.815733668,LastTimestamp:2026-03-20 15:39:01.602362289 +0000 UTC m=+0.815733668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.049429 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.635147837 +0000 UTC m=+0.848519206,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.055218 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.635169658 +0000 UTC m=+0.848541027,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.058070 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 
15:39:01.635178349 +0000 UTC m=+0.848549718,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.059391 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.636910564 +0000 UTC m=+0.850281933,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.064091 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.636933695 +0000 UTC m=+0.850305064,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.068591 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.636947536 +0000 UTC m=+0.850318905,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.072180 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.637163852 +0000 UTC m=+0.850535251,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.075666 4730 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.637192864 +0000 UTC m=+0.850564253,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.078955 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.637208855 +0000 UTC m=+0.850580244,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.081947 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.640724649 +0000 UTC m=+0.854096018,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.085762 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.64074046 +0000 UTC m=+0.854111829,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.088797 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.640752911 +0000 UTC m=+0.854124270,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.092099 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.641634175 +0000 UTC m=+0.855005544,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.095440 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.641737922 +0000 UTC m=+0.855109291,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.098534 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.642359627 +0000 UTC m=+0.855730996,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.102644 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC 
m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.643592886 +0000 UTC m=+0.856964255,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.106957 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.643606567 +0000 UTC m=+0.856977926,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.111804 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93f625\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93f625 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524117029 +0000 UTC m=+0.737488398,LastTimestamp:2026-03-20 15:39:01.643615728 +0000 UTC m=+0.856987097,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.115794 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c937c06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c937c06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524085766 +0000 UTC m=+0.737457135,LastTimestamp:2026-03-20 15:39:01.64392597 +0000 UTC m=+0.857297339,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.119756 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e96d43c93c92e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e96d43c93c92e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:01.524105518 +0000 UTC m=+0.737476887,LastTimestamp:2026-03-20 15:39:01.644022117 +0000 UTC m=+0.857393486,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.125459 4730 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d45af0db80 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.033521536 +0000 UTC m=+1.246892905,LastTimestamp:2026-03-20 15:39:02.033521536 +0000 UTC m=+1.246892905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.136958 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d45b419b97 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.038813591 +0000 UTC 
m=+1.252184950,LastTimestamp:2026-03-20 15:39:02.038813591 +0000 UTC m=+1.252184950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.141723 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d45b747955 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.042147157 +0000 UTC m=+1.255518526,LastTimestamp:2026-03-20 15:39:02.042147157 +0000 UTC m=+1.255518526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.145116 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d45b9b224f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.044680783 +0000 UTC m=+1.258052152,LastTimestamp:2026-03-20 15:39:02.044680783 +0000 UTC m=+1.258052152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.149784 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d45bf4c1f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.050554357 +0000 UTC m=+1.263925726,LastTimestamp:2026-03-20 15:39:02.050554357 +0000 UTC m=+1.263925726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.153942 4730 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d4806b763b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.662313531 +0000 UTC m=+1.875684900,LastTimestamp:2026-03-20 15:39:02.662313531 +0000 UTC m=+1.875684900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.157667 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4806b76bd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.662313661 +0000 UTC m=+1.875685050,LastTimestamp:2026-03-20 15:39:02.662313661 +0000 UTC m=+1.875685050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.159994 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d4815c756b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.678107499 +0000 UTC m=+1.891478868,LastTimestamp:2026-03-20 15:39:02.678107499 +0000 UTC m=+1.891478868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.161636 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d48161ba15 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.678452757 +0000 UTC m=+1.891824136,LastTimestamp:2026-03-20 15:39:02.678452757 +0000 UTC 
m=+1.891824136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.164872 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d48165b406 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.67871335 +0000 UTC m=+1.892084719,LastTimestamp:2026-03-20 15:39:02.67871335 +0000 UTC m=+1.892084719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.168524 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d48165d116 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 
15:39:02.67872079 +0000 UTC m=+1.892092159,LastTimestamp:2026-03-20 15:39:02.67872079 +0000 UTC m=+1.892092159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.171761 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d481664cb0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.678752432 +0000 UTC m=+1.892123801,LastTimestamp:2026-03-20 15:39:02.678752432 +0000 UTC m=+1.892123801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.175821 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d48179d95a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.680033626 +0000 UTC m=+1.893404995,LastTimestamp:2026-03-20 15:39:02.680033626 +0000 UTC m=+1.893404995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.180224 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d4824a35c9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.693688777 +0000 UTC m=+1.907060156,LastTimestamp:2026-03-20 15:39:02.693688777 +0000 UTC m=+1.907060156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.183674 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4830348df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.705817823 +0000 UTC m=+1.919189192,LastTimestamp:2026-03-20 15:39:02.705817823 +0000 UTC m=+1.919189192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.187178 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d48309835d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.706226013 +0000 UTC m=+1.919597382,LastTimestamp:2026-03-20 15:39:02.706226013 +0000 UTC m=+1.919597382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.190703 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d492a67a90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.968171152 +0000 UTC m=+2.181542521,LastTimestamp:2026-03-20 15:39:02.968171152 +0000 UTC m=+2.181542521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.193871 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49342e742 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.978422594 +0000 UTC m=+2.191793963,LastTimestamp:2026-03-20 15:39:02.978422594 +0000 UTC m=+2.191793963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.197269 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49354c19a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.979592602 +0000 UTC m=+2.192963971,LastTimestamp:2026-03-20 15:39:02.979592602 +0000 UTC m=+2.192963971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.200630 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49cae779d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.136466845 +0000 UTC m=+2.349838224,LastTimestamp:2026-03-20 15:39:03.136466845 +0000 UTC m=+2.349838224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.203735 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49d6b57a3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.148844963 +0000 UTC m=+2.362216342,LastTimestamp:2026-03-20 15:39:03.148844963 +0000 UTC m=+2.362216342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.207317 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49d7a40db openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.149822171 +0000 UTC m=+2.363193540,LastTimestamp:2026-03-20 15:39:03.149822171 +0000 UTC m=+2.363193540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.210753 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d4a6fa0dbe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.309192638 +0000 UTC m=+2.522564007,LastTimestamp:2026-03-20 15:39:03.309192638 +0000 UTC m=+2.522564007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.214063 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d4a7dbd60d 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.323989517 +0000 UTC m=+2.537360886,LastTimestamp:2026-03-20 15:39:03.323989517 +0000 UTC m=+2.537360886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.217816 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4b58f2306 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.553843974 +0000 UTC m=+2.767215343,LastTimestamp:2026-03-20 15:39:03.553843974 +0000 UTC m=+2.767215343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.221231 4730 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d4b5b248db openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.556147419 +0000 UTC m=+2.769518798,LastTimestamp:2026-03-20 15:39:03.556147419 +0000 UTC m=+2.769518798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.224978 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d4b5eb0630 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.559865904 +0000 UTC 
m=+2.773237283,LastTimestamp:2026-03-20 15:39:03.559865904 +0000 UTC m=+2.773237283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.229648 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4b6710801 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.568648193 +0000 UTC m=+2.782019562,LastTimestamp:2026-03-20 15:39:03.568648193 +0000 UTC m=+2.782019562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.232901 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d4c2b3986f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.774337135 +0000 UTC m=+2.987708504,LastTimestamp:2026-03-20 15:39:03.774337135 +0000 UTC m=+2.987708504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.236293 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e96d4c35b9035 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.785345077 +0000 UTC m=+2.998716446,LastTimestamp:2026-03-20 15:39:03.785345077 +0000 UTC m=+2.998716446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.240016 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189e96d4c39c53f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.789589493 +0000 UTC m=+3.002960862,LastTimestamp:2026-03-20 15:39:03.789589493 +0000 UTC m=+3.002960862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.243366 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4c400d504 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.796176132 +0000 UTC m=+3.009547501,LastTimestamp:2026-03-20 15:39:03.796176132 +0000 UTC m=+3.009547501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.249595 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4c400d518 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.796176152 +0000 UTC m=+3.009547521,LastTimestamp:2026-03-20 15:39:03.796176152 +0000 UTC m=+3.009547521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.258360 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d4c4398633 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.799891507 +0000 UTC m=+3.013262876,LastTimestamp:2026-03-20 15:39:03.799891507 +0000 UTC m=+3.013262876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.263868 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4c4d99db9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.810383289 +0000 UTC m=+3.023754658,LastTimestamp:2026-03-20 15:39:03.810383289 +0000 UTC m=+3.023754658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.267271 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4c4eda1de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.81169507 +0000 UTC m=+3.025066449,LastTimestamp:2026-03-20 15:39:03.81169507 +0000 UTC m=+3.025066449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc 
kubenswrapper[4730]: E0320 15:39:47.271528 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4c56429c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.819463105 +0000 UTC m=+3.032834474,LastTimestamp:2026-03-20 15:39:03.819463105 +0000 UTC m=+3.032834474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.275470 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4c5851e4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:03.82162286 +0000 UTC m=+3.034994229,LastTimestamp:2026-03-20 15:39:03.82162286 
+0000 UTC m=+3.034994229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.278834 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4d2549def openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.036548079 +0000 UTC m=+3.249919448,LastTimestamp:2026-03-20 15:39:04.036548079 +0000 UTC m=+3.249919448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.282082 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4d2559e95 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.036613781 +0000 UTC m=+3.249985140,LastTimestamp:2026-03-20 15:39:04.036613781 +0000 UTC m=+3.249985140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.285536 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4d3415863 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.052062307 +0000 UTC m=+3.265433676,LastTimestamp:2026-03-20 15:39:04.052062307 +0000 UTC m=+3.265433676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.289089 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4d3573c3d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.053496893 +0000 UTC m=+3.266868252,LastTimestamp:2026-03-20 15:39:04.053496893 +0000 UTC m=+3.266868252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.292652 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4d36b99ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.054831566 +0000 UTC m=+3.268202935,LastTimestamp:2026-03-20 15:39:04.054831566 +0000 UTC m=+3.268202935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.296030 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4d37a9e7c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.055815804 +0000 UTC m=+3.269187193,LastTimestamp:2026-03-20 15:39:04.055815804 +0000 UTC m=+3.269187193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.299361 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4deacfffd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.243666941 +0000 UTC m=+3.457038330,LastTimestamp:2026-03-20 15:39:04.243666941 +0000 UTC m=+3.457038330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.302358 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4debd2ac8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.244726472 +0000 UTC m=+3.458097841,LastTimestamp:2026-03-20 15:39:04.244726472 +0000 UTC m=+3.458097841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.305793 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e96d4e00207a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.266016677 
+0000 UTC m=+3.479388046,LastTimestamp:2026-03-20 15:39:04.266016677 +0000 UTC m=+3.479388046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.309407 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4e0079a39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.266381881 +0000 UTC m=+3.479753250,LastTimestamp:2026-03-20 15:39:04.266381881 +0000 UTC m=+3.479753250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.313668 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4e01ddd51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.267840849 +0000 UTC m=+3.481212228,LastTimestamp:2026-03-20 15:39:04.267840849 +0000 UTC m=+3.481212228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.317197 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4ec911e79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.476720761 +0000 UTC m=+3.690092130,LastTimestamp:2026-03-20 15:39:04.476720761 +0000 UTC m=+3.690092130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.320335 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4ed859673 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.492742259 +0000 UTC m=+3.706113628,LastTimestamp:2026-03-20 15:39:04.492742259 +0000 UTC m=+3.706113628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.323312 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4ed978b3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.493919035 +0000 UTC m=+3.707290414,LastTimestamp:2026-03-20 15:39:04.493919035 +0000 UTC m=+3.707290414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.326613 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d4f249f6b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.572720816 +0000 UTC m=+3.786092185,LastTimestamp:2026-03-20 15:39:04.572720816 +0000 UTC m=+3.786092185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.330226 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4fa00d046 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.702144582 +0000 UTC m=+3.915515951,LastTimestamp:2026-03-20 15:39:04.702144582 +0000 UTC m=+3.915515951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc 
kubenswrapper[4730]: E0320 15:39:47.334156 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4fb010252 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.71893461 +0000 UTC m=+3.932305979,LastTimestamp:2026-03-20 15:39:04.71893461 +0000 UTC m=+3.932305979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.338107 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d502251847 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.838740039 +0000 UTC m=+4.052111408,LastTimestamp:2026-03-20 15:39:04.838740039 +0000 UTC m=+4.052111408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.342172 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d505667dbd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.893357501 +0000 UTC m=+4.106728870,LastTimestamp:2026-03-20 15:39:04.893357501 +0000 UTC m=+4.106728870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.345562 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d52f3a4fee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:05.595105262 +0000 UTC m=+4.808476631,LastTimestamp:2026-03-20 15:39:05.595105262 +0000 UTC 
m=+4.808476631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.350069 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d53d930a68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:05.835801192 +0000 UTC m=+5.049172601,LastTimestamp:2026-03-20 15:39:05.835801192 +0000 UTC m=+5.049172601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.354336 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d53ee5b461 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:05.857995873 +0000 UTC m=+5.071367242,LastTimestamp:2026-03-20 15:39:05.857995873 +0000 UTC m=+5.071367242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.357610 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d53f00aebe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:05.859763902 +0000 UTC m=+5.073135311,LastTimestamp:2026-03-20 15:39:05.859763902 +0000 UTC m=+5.073135311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.362197 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d54b95aed5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.070855381 +0000 UTC m=+5.284226750,LastTimestamp:2026-03-20 15:39:06.070855381 +0000 UTC 
m=+5.284226750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.366542 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d54c238795 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.080151445 +0000 UTC m=+5.293522814,LastTimestamp:2026-03-20 15:39:06.080151445 +0000 UTC m=+5.293522814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.370682 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d54c386411 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.081518609 +0000 UTC 
m=+5.294889998,LastTimestamp:2026-03-20 15:39:06.081518609 +0000 UTC m=+5.294889998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.374330 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d55bdedbff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.344086527 +0000 UTC m=+5.557457896,LastTimestamp:2026-03-20 15:39:06.344086527 +0000 UTC m=+5.557457896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.378794 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d55d9e9c05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.373430277 +0000 UTC m=+5.586801646,LastTimestamp:2026-03-20 15:39:06.373430277 +0000 UTC 
m=+5.586801646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.382234 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d55db41b71 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.374839153 +0000 UTC m=+5.588210522,LastTimestamp:2026-03-20 15:39:06.374839153 +0000 UTC m=+5.588210522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.386591 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d56aa0085b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.591627355 +0000 UTC 
m=+5.804998754,LastTimestamp:2026-03-20 15:39:06.591627355 +0000 UTC m=+5.804998754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.390373 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d56ba550b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.608750776 +0000 UTC m=+5.822122145,LastTimestamp:2026-03-20 15:39:06.608750776 +0000 UTC m=+5.822122145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.394989 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d56bb6c7b7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.609895351 +0000 UTC m=+5.823266710,LastTimestamp:2026-03-20 15:39:06.609895351 +0000 UTC m=+5.823266710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.398013 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d57624d8f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:06.784880884 +0000 UTC m=+5.998252263,LastTimestamp:2026-03-20 15:39:06.784880884 +0000 UTC m=+5.998252263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.402095 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e96d576d5ea95 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 
15:39:06.796485269 +0000 UTC m=+6.009856638,LastTimestamp:2026-03-20 15:39:06.796485269 +0000 UTC m=+6.009856638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.405976 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-controller-manager-crc.189e96d59c52d686 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 20 15:39:47 crc kubenswrapper[4730]: body: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:07.425429126 +0000 UTC m=+6.638800495,LastTimestamp:2026-03-20 15:39:07.425429126 +0000 UTC m=+6.638800495,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.409918 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d59c53d966 openshift-kube-controller-manager 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:07.425495398 +0000 UTC m=+6.638866767,LastTimestamp:2026-03-20 15:39:07.425495398 +0000 UTC m=+6.638866767,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.415281 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e96d4ed978b3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4ed978b3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.493919035 +0000 UTC m=+3.707290414,LastTimestamp:2026-03-20 15:39:16.642042504 +0000 UTC m=+15.855413883,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.420149 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e96d4fa00d046\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4fa00d046 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.702144582 +0000 UTC m=+3.915515951,LastTimestamp:2026-03-20 15:39:16.941657268 +0000 UTC m=+16.155028637,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.423402 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e96d4fb010252\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d4fb010252 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:04.71893461 +0000 UTC m=+3.932305979,LastTimestamp:2026-03-20 15:39:16.971223083 +0000 UTC m=+16.184594452,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: I0320 15:39:47.425091 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:39:47 crc kubenswrapper[4730]: I0320 15:39:47.425131 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.426989 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-apiserver-crc.189e96d7d7d86dbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 
15:39:47 crc kubenswrapper[4730]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 15:39:47 crc kubenswrapper[4730]: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.013974462 +0000 UTC m=+16.227345821,LastTimestamp:2026-03-20 15:39:17.013974462 +0000 UTC m=+16.227345821,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.430780 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d7d7d8f5a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.014009253 +0000 UTC m=+16.227380622,LastTimestamp:2026-03-20 15:39:17.014009253 +0000 UTC m=+16.227380622,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.433927 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-apiserver-crc.189e96d7d816b0c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 15:39:47 crc kubenswrapper[4730]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 20 15:39:47 crc kubenswrapper[4730]: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.018054855 +0000 UTC m=+16.231426214,LastTimestamp:2026-03-20 15:39:17.018054855 +0000 UTC m=+16.231426214,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.437181 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e96d7d7d8f5a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e96d7d7d8f5a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.014009253 +0000 UTC m=+16.227380622,LastTimestamp:2026-03-20 15:39:17.018097676 +0000 UTC m=+16.231469045,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.440275 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f076901f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 15:39:47 crc kubenswrapper[4730]: body: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.426991135 +0000 UTC m=+16.640362504,LastTimestamp:2026-03-20 15:39:17.426991135 +0000 UTC m=+16.640362504,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc 
kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.443382 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f0780e29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.427088937 +0000 UTC m=+16.640460296,LastTimestamp:2026-03-20 15:39:17.427088937 +0000 UTC m=+16.640460296,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.448031 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d7f076901f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f076901f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 15:39:47 crc kubenswrapper[4730]: body: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.426991135 +0000 UTC m=+16.640362504,LastTimestamp:2026-03-20 15:39:27.425503088 +0000 UTC m=+26.638874487,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.451490 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d7f0780e29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f0780e29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.427088937 +0000 UTC m=+16.640460296,LastTimestamp:2026-03-20 15:39:27.4256045 
+0000 UTC m=+26.638975909,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.454973 4730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96da44e951c1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:27.433732545 +0000 UTC m=+26.647103954,LastTimestamp:2026-03-20 15:39:27.433732545 +0000 UTC m=+26.647103954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.458548 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d48179d95a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d48179d95a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.680033626 +0000 UTC m=+1.893404995,LastTimestamp:2026-03-20 15:39:27.587768103 +0000 UTC m=+26.801139502,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.461917 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d492a67a90\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d492a67a90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.968171152 +0000 UTC m=+2.181542521,LastTimestamp:2026-03-20 15:39:27.826369865 +0000 UTC m=+27.039741244,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: I0320 15:39:47.465386 4730 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.465431 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d49342e742\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d49342e742 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:02.978422594 +0000 UTC m=+2.191793963,LastTimestamp:2026-03-20 15:39:27.840012341 +0000 UTC m=+27.053383720,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.469975 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d7f076901f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f076901f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 15:39:47 crc kubenswrapper[4730]: body: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.426991135 +0000 UTC m=+16.640362504,LastTimestamp:2026-03-20 15:39:37.425645017 +0000 UTC m=+36.639016406,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.473106 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d7f0780e29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f0780e29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.427088937 +0000 UTC m=+16.640460296,LastTimestamp:2026-03-20 15:39:37.425691458 
+0000 UTC m=+36.639062837,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.476913 4730 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e96d7f076901f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 15:39:47 crc kubenswrapper[4730]: &Event{ObjectMeta:{kube-controller-manager-crc.189e96d7f076901f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 15:39:47 crc kubenswrapper[4730]: body: Mar 20 15:39:47 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:39:17.426991135 +0000 UTC m=+16.640362504,LastTimestamp:2026-03-20 15:39:47.42511929 +0000 UTC m=+46.638490659,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:39:47 crc kubenswrapper[4730]: > Mar 20 15:39:47 crc kubenswrapper[4730]: W0320 15:39:47.574193 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 20 15:39:47 crc kubenswrapper[4730]: E0320 15:39:47.574242 4730 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 20 15:39:48 crc kubenswrapper[4730]: I0320 15:39:48.470603 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:49 crc kubenswrapper[4730]: I0320 15:39:49.470801 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:50 crc kubenswrapper[4730]: W0320 15:39:50.248653 4730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 20 15:39:50 crc kubenswrapper[4730]: E0320 15:39:50.249005 4730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 15:39:50 crc kubenswrapper[4730]: I0320 15:39:50.469791 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.433441 4730 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.435060 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.435138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.435160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.435209 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:51 crc kubenswrapper[4730]: E0320 15:39:51.440686 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 15:39:51 crc kubenswrapper[4730]: E0320 15:39:51.440747 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 15:39:51 crc kubenswrapper[4730]: I0320 15:39:51.468783 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:51 crc kubenswrapper[4730]: E0320 15:39:51.610838 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:39:52 crc kubenswrapper[4730]: I0320 15:39:52.469613 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:53 crc kubenswrapper[4730]: I0320 15:39:53.470420 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.430774 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.430931 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.431943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.431967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.431977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.436705 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.472449 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.777969 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:54 crc 
kubenswrapper[4730]: I0320 15:39:54.779220 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.779277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:54 crc kubenswrapper[4730]: I0320 15:39:54.779293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:55 crc kubenswrapper[4730]: I0320 15:39:55.471494 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.024781 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.024938 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.025907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.025932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.025941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:56 crc kubenswrapper[4730]: I0320 15:39:56.469428 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:57 
crc kubenswrapper[4730]: I0320 15:39:57.468608 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.441744 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.442902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.442943 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.442956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.442982 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:39:58 crc kubenswrapper[4730]: E0320 15:39:58.446169 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 15:39:58 crc kubenswrapper[4730]: E0320 15:39:58.446422 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.467010 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.532858 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.534019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.534061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.534070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.534648 4730 scope.go:117] "RemoveContainer" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.789122 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.790764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5"} Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.790908 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.793412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 15:39:58.793437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:58 crc kubenswrapper[4730]: I0320 
15:39:58.793445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.470962 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.794374 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.794862 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.797160 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" exitCode=255 Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.797201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5"} Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.797239 4730 scope.go:117] "RemoveContainer" containerID="e2a38d11938583eb373fcd731b30daf31d61c71c2ace80683efb60769ab0b694" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.797439 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.798589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 
20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.798702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.798731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:39:59 crc kubenswrapper[4730]: I0320 15:39:59.799630 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:39:59 crc kubenswrapper[4730]: E0320 15:39:59.799921 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:40:00 crc kubenswrapper[4730]: I0320 15:40:00.469830 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:00 crc kubenswrapper[4730]: I0320 15:40:00.800841 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 15:40:01 crc kubenswrapper[4730]: I0320 15:40:01.469712 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:01 crc kubenswrapper[4730]: E0320 15:40:01.611775 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node 
info: node \"crc\" not found" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.471077 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.864939 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.865412 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.866889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.866994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.867057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:02 crc kubenswrapper[4730]: I0320 15:40:02.867677 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:40:02 crc kubenswrapper[4730]: E0320 15:40:02.867902 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.470047 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.967369 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.967567 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.968689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.968738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.968756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:03 crc kubenswrapper[4730]: I0320 15:40:03.969569 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:40:03 crc kubenswrapper[4730]: E0320 15:40:03.969863 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:40:04 crc kubenswrapper[4730]: I0320 15:40:04.469975 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:05 crc 
kubenswrapper[4730]: I0320 15:40:05.446230 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:40:05 crc kubenswrapper[4730]: I0320 15:40:05.447120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:05 crc kubenswrapper[4730]: I0320 15:40:05.447170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:05 crc kubenswrapper[4730]: I0320 15:40:05.447184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:05 crc kubenswrapper[4730]: I0320 15:40:05.447212 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:40:05 crc kubenswrapper[4730]: E0320 15:40:05.450876 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 15:40:05 crc kubenswrapper[4730]: E0320 15:40:05.450920 4730 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 15:40:05 crc kubenswrapper[4730]: I0320 15:40:05.468341 4730 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 15:40:06 crc kubenswrapper[4730]: I0320 15:40:06.086341 4730 csr.go:261] certificate signing request csr-72rl6 is approved, waiting to be issued Mar 20 15:40:06 crc kubenswrapper[4730]: I0320 15:40:06.093759 4730 csr.go:257] certificate signing request csr-72rl6 is issued Mar 20 15:40:06 
crc kubenswrapper[4730]: I0320 15:40:06.151617 4730 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 20 15:40:06 crc kubenswrapper[4730]: I0320 15:40:06.261059 4730 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 20 15:40:07 crc kubenswrapper[4730]: I0320 15:40:07.095156 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-09 20:16:13.685260601 +0000 UTC Mar 20 15:40:07 crc kubenswrapper[4730]: I0320 15:40:07.095202 4730 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6340h36m6.5900616s for next certificate rotation Mar 20 15:40:11 crc kubenswrapper[4730]: E0320 15:40:11.612862 4730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.451722 4730 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.453022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.453059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.453071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.453183 4730 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.456930 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.460727 
4730 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.461093 4730 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.462760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.462795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.462807 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.462826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.462838 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.480794 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.485527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.485652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.485678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.485708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.485731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.493571 4730 apiserver.go:52] "Watching apiserver" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.501861 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.502698 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6r2kn","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-operator/iptables-alerter-4ln5h","openshift-machine-config-operator/machine-config-daemon-p5qvf","openshift-image-registry/node-ca-n4w74","openshift-multus/multus-additional-cni-plugins-49hht","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh","openshift-dns/node-resolver-69fnw","openshift-ovn-kubernetes/ovnkube-node-qj97f"] Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.504463 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.504998 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.505520 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.505706 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.506084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.506271 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.506506 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.507018 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.507674 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.507782 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.507915 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.508679 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.509212 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.509488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.509653 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.509788 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.508689 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.510745 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.512802 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.512844 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.512910 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.513394 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.513795 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.514160 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.515297 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.515491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.515520 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.515652 4730 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.515775 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.518102 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.524662 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.524821 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.525269 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.525521 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.525808 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526180 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.525406 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526198 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.526459 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526495 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526549 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526578 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526626 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526575 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526882 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.526933 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.527798 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.528547 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.528722 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.528812 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.528855 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.528911 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.529011 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.529309 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.530184 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.532967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.533643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.533748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.533819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.533881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.552244 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.556212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.563770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.563828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.563856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.563891 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.563916 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.571327 4730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.572998 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.582060 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.585613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.585643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.585652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.585665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.585675 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.588187 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.607484 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.607639 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.607887 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611377 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611447 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611474 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611497 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611518 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611537 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611557 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611581 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611623 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611645 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611670 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611691 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.611710 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611730 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611751 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611770 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611785 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611804 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611826 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611846 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611867 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611889 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611934 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611955 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.611978 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612000 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612024 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612044 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612064 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612085 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612128 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612149 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 
20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612171 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612193 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612234 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612295 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612341 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612361 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612380 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612400 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612422 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 15:40:12 
crc kubenswrapper[4730]: I0320 15:40:12.612444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612466 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612486 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612506 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612546 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612566 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612617 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612640 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612658 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612679 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.612700 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.617066 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.617241 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.617654 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.617754 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.617986 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.618112 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.618174 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.618371 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.618749 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.618872 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619073 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619075 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619271 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619454 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619728 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619785 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.619923 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.620103 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.620111 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.620500 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.620468 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.620832 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.621746 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622078 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622506 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622597 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622869 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.622914 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.623400 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.623415 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.623986 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.624072 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.624427 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.624707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.624826 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.625146 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.625562 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.626188 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.626688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.627633 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.627777 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628032 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628345 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628594 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628835 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.628901 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.629194 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.629412 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.629564 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.629895 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.629981 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630149 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630207 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630271 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630289 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630322 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630354 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630369 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630384 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630394 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: 
"9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630519 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630545 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630753 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630779 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630799 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630844 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630866 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630888 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630905 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630930 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630951 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.630991 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631014 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631034 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631076 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631099 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631144 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631163 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631225 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631267 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631290 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631313 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631370 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631391 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631472 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631512 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631534 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631552 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631605 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631626 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631379 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.631968 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.632354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.632403 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.632703 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.632742 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.632971 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633009 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633191 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633500 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633514 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.633914 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.634025 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.634193 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.634486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.634829 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.634818 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.635344 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.636406 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.636490 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.636717 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.636867 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.636980 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.637185 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.637228 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.637407 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.637736 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638005 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638022 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638082 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638444 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638851 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638720 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.638993 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.639372 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.639465 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.640611 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.640658 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.640686 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646012 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.645946 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646173 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646351 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646389 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646411 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646430 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646690 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646728 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.646747 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647217 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647385 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647549 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647621 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647702 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647783 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647856 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647934 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 15:40:12 
crc kubenswrapper[4730]: I0320 15:40:12.648009 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648085 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648151 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648222 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648322 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648392 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648491 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648584 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648671 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648761 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648851 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 
15:40:12.648926 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648993 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649060 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649200 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649360 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649431 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649571 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649737 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649808 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649875 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649939 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650099 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650166 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650349 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650447 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650545 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650625 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650700 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650771 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.650838 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.650909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651033 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651100 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651166 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651232 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651338 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651405 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651540 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651608 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651680 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 
20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651783 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651881 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.651973 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652072 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652171 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652299 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652416 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652527 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652637 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652743 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652850 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: 
I0320 15:40:12.652955 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653081 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653225 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653456 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653692 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653793 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653890 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.653997 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654332 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654458 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654851 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654950 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655045 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655145 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655274 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655434 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655801 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.647233 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.648330 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649080 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.649978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652089 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652421 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.652913 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654042 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654236 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654446 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.654774 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655339 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.655978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.657063 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.657560 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.657789 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.657833 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.657852 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658127 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658305 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658429 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658477 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658666 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658942 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658974 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.659063 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.659569 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-netns\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.659622 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.659671 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.658889 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod 
"1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660179 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660282 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660374 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660403 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660485 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660528 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660642 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660858 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663090 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663401 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663434 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663530 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663571 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663589 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663737 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663767 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663790 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.663918 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664196 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.664277 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.164238304 +0000 UTC m=+72.377609753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664326 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664203 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664453 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-system-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664751 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.664818 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fcd3db3-55f1-4c23-8fa9-78844495cea3-rootfs\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665076 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665399 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665424 4730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665517 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665425 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665632 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665688 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665716 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-daemon-config\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665778 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665787 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665801 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665855 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cni-binary-copy\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665867 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665880 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cnibin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.665956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz64b\" (UniqueName: \"kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 
15:40:12.665986 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-binary-copy\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666009 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666102 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvthz\" (UniqueName: \"kubernetes.io/projected/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-kube-api-access-vvthz\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666130 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xpw\" (UniqueName: \"kubernetes.io/projected/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-kube-api-access-d4xpw\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: 
\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-socket-dir-parent\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-bin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666350 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-multus-certs\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666375 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666398 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666523 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666566 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/102cb977-7291-453e-9282-20572071afee-hosts-file\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666617 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666642 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666721 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666822 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666846 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666870 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtg2\" (UniqueName: \"kubernetes.io/projected/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-kube-api-access-4qtg2\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-os-release\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.666942 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-conf-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667172 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-os-release\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667196 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fcd3db3-55f1-4c23-8fa9-78844495cea3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667217 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2ee8d55f-90bd-4484-8455-933de455efea-serviceca\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667258 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667283 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667291 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667305 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fcd3db3-55f1-4c23-8fa9-78844495cea3-proxy-tls\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667351 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-multus\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667373 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-hostroot\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667412 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667432 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cnibin\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667454 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-kubelet\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667478 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667497 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket\") pod \"ovnkube-node-qj97f\" (UID: 
\"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667520 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667548 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667572 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-system-cni-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667373 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667432 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667588 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.667567 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.667658 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.167642267 +0000 UTC m=+72.381013636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667684 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667762 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-etc-kubernetes\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plthx\" (UniqueName: \"kubernetes.io/projected/102cb977-7291-453e-9282-20572071afee-kube-api-access-plthx\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667808 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667862 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.667923 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.667933 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.667974 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.167960866 +0000 UTC m=+72.381332365 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ee8d55f-90bd-4484-8455-933de455efea-host\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668036 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fvg6\" (UniqueName: \"kubernetes.io/projected/2ee8d55f-90bd-4484-8455-933de455efea-kube-api-access-2fvg6\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668062 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzk8j\" (UniqueName: \"kubernetes.io/projected/7fcd3db3-55f1-4c23-8fa9-78844495cea3-kube-api-access-lzk8j\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668090 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668114 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668155 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668172 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-k8s-cni-cncf-io\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668318 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.668608 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.669171 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.669198 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.669235 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.669731 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670042 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670060 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670071 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670082 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670083 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670084 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670095 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670134 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670148 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670160 4730 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670171 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670181 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670190 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670199 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670209 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670219 4730 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670228 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670236 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660819 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670271 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670299 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670310 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670323 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670343 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670353 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670363 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.670374 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670384 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670394 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670433 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670389 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670489 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.670694 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660921 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.660952 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.671178 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.661310 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.674947 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675023 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675041 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675051 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675071 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675090 4730 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675101 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675111 4730 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675120 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675131 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675140 4730 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675149 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675162 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675171 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675180 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675292 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675306 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675314 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675323 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675332 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.675341 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675351 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675360 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675368 4730 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675379 4730 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675388 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675397 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675406 4730 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675415 4730 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675423 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675433 4730 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675440 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675448 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675457 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675465 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675473 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675489 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675498 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675506 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.675515 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676091 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676211 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676222 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676231 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676239 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676261 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676269 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676282 4730 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676295 4730 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676308 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676320 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676333 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676344 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676355 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676365 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676373 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676382 4730 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676390 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676398 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676407 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676415 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676423 4730 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676431 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676439 4730 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676446 4730 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676455 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676463 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676471 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676481 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676489 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676498 4730 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676507 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676516 4730 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676524 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676533 4730 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676541 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676550 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676559 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node 
\"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676566 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676574 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676582 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676589 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676600 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676608 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676616 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676626 4730 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676634 4730 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676643 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676651 4730 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676659 4730 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676669 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676678 4730 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676686 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676695 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676703 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676711 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676719 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676727 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676735 4730 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676743 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" 
DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676752 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676760 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676768 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676776 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676785 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676794 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676803 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676811 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676820 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676828 4730 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676837 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676844 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676853 4730 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676862 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676873 4730 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676883 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676892 4730 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676901 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676909 4730 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676918 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676926 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676934 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676944 4730 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676952 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676960 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676969 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676977 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676985 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.676993 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" 
DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677001 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677010 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677018 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677027 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677035 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677044 4730 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677052 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677061 4730 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677069 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677079 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677086 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677094 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.677102 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.678131 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.681349 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.681493 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.681509 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.681520 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.681571 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.181554725 +0000 UTC m=+72.394926094 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.681993 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.682901 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.683340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.683366 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.683580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.684002 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.684105 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.684125 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.684142 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.684194 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.184175446 +0000 UTC m=+72.397546815 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.684233 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.684875 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.685604 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.687384 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.691504 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.693682 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.696818 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.700183 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.703504 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.708629 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.710981 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.719808 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.724934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.724967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.724977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.724992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.725002 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.727585 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.741428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.751149 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.755854 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2prfn"] Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.756376 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.756431 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.759731 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.768303 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777025 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-daemon-config\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777483 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777507 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch\") 
pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777528 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cnibin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777543 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cni-binary-copy\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777558 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-binary-copy\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777575 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777589 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: 
\"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777605 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz64b\" (UniqueName: \"kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777619 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777636 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvthz\" (UniqueName: \"kubernetes.io/projected/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-kube-api-access-vvthz\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777655 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet\") pod \"ovnkube-node-qj97f\" (UID: 
\"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777691 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777724 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xpw\" (UniqueName: \"kubernetes.io/projected/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-kube-api-access-d4xpw\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777740 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-socket-dir-parent\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777739 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-bin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777755 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-bin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777831 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777852 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-multus-certs\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod 
\"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777977 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtg2\" (UniqueName: \"kubernetes.io/projected/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-kube-api-access-4qtg2\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778012 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/102cb977-7291-453e-9282-20572071afee-hosts-file\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778051 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778055 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-multus-certs\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd\") pod \"ovnkube-node-qj97f\" (UID: 
\"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778117 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778140 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-os-release\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778213 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778226 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: 
\"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778221 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fcd3db3-55f1-4c23-8fa9-78844495cea3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778295 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778351 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-os-release\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-conf-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.778433 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xml\" (UniqueName: \"kubernetes.io/projected/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-kube-api-access-m7xml\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778457 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2ee8d55f-90bd-4484-8455-933de455efea-serviceca\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778453 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778480 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778465 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/102cb977-7291-453e-9282-20572071afee-hosts-file\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778508 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cnibin\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fcd3db3-55f1-4c23-8fa9-78844495cea3-proxy-tls\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778664 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-socket-dir-parent\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778682 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-multus\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778707 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-hostroot\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778752 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778776 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778830 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-system-cni-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778853 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-kubelet\") pod 
\"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778898 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cni-binary-copy\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778915 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778956 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-cni-multus\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-conf-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 
15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779018 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779060 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779079 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-os-release\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779121 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-plthx\" (UniqueName: \"kubernetes.io/projected/102cb977-7291-453e-9282-20572071afee-kube-api-access-plthx\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779157 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-etc-kubernetes\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779217 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779225 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/2ee8d55f-90bd-4484-8455-933de455efea-host\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-cnibin\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779265 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fvg6\" (UniqueName: \"kubernetes.io/projected/2ee8d55f-90bd-4484-8455-933de455efea-kube-api-access-2fvg6\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779291 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzk8j\" (UniqueName: \"kubernetes.io/projected/7fcd3db3-55f1-4c23-8fa9-78844495cea3-kube-api-access-lzk8j\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-os-release\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-k8s-cni-cncf-io\") pod \"multus-6r2kn\" (UID: 
\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779354 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-system-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-netns\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779471 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fcd3db3-55f1-4c23-8fa9-78844495cea3-rootfs\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779476 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779529 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779542 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779731 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-binary-copy\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779771 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.778351 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-multus-daemon-config\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.777676 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779933 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.779937 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2ee8d55f-90bd-4484-8455-933de455efea-serviceca\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780100 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780123 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-etc-kubernetes\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780150 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ee8d55f-90bd-4484-8455-933de455efea-host\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780147 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fcd3db3-55f1-4c23-8fa9-78844495cea3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780176 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fcd3db3-55f1-4c23-8fa9-78844495cea3-rootfs\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-k8s-cni-cncf-io\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780325 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-system-cni-dir\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780489 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780486 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780528 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-run-netns\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780538 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-host-var-lib-kubelet\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-hostroot\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-cnibin\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") 
" pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780592 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.781065 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.780712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.782055 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc 
kubenswrapper[4730]: I0320 15:40:12.782547 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.782571 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.782663 4730 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789652 4730 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789693 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789705 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789733 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789757 4730 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789767 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789801 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789814 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790166 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790185 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790195 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790203 4730 reconciler_common.go:293] "Volume detached for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790213 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790335 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790358 4730 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790368 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790378 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790392 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790405 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node 
\"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790418 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790433 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790446 4730 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.790458 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.786030 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fcd3db3-55f1-4c23-8fa9-78844495cea3-proxy-tls\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.789908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" 
Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.783439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.782026 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-system-cni-dir\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.792637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzk8j\" (UniqueName: \"kubernetes.io/projected/7fcd3db3-55f1-4c23-8fa9-78844495cea3-kube-api-access-lzk8j\") pod \"machine-config-daemon-p5qvf\" (UID: \"7fcd3db3-55f1-4c23-8fa9-78844495cea3\") " pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.794855 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtg2\" (UniqueName: \"kubernetes.io/projected/dbb015c0-3a11-48bf-a59f-22bc03ca2fb9-kube-api-access-4qtg2\") pod \"multus-additional-cni-plugins-49hht\" (UID: \"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\") " pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.795738 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz64b\" (UniqueName: \"kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b\") pod \"ovnkube-node-qj97f\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.796021 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.796550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xpw\" (UniqueName: \"kubernetes.io/projected/a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0-kube-api-access-d4xpw\") pod \"ovnkube-control-plane-749d76644c-p47zh\" (UID: \"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.799565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fvg6\" (UniqueName: \"kubernetes.io/projected/2ee8d55f-90bd-4484-8455-933de455efea-kube-api-access-2fvg6\") pod \"node-ca-n4w74\" (UID: \"2ee8d55f-90bd-4484-8455-933de455efea\") " pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.799648 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvthz\" (UniqueName: \"kubernetes.io/projected/6f97b1f1-1fad-44ec-8253-17dd6a5eee54-kube-api-access-vvthz\") pod \"multus-6r2kn\" (UID: \"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\") " pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.807665 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plthx\" (UniqueName: 
\"kubernetes.io/projected/102cb977-7291-453e-9282-20572071afee-kube-api-access-plthx\") pod \"node-resolver-69fnw\" (UID: \"102cb977-7291-453e-9282-20572071afee\") " pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.807397 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.817919 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.826093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.827786 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.827816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.827826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.827839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.827849 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.831471 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.837655 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.839271 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.845960 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.847362 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:12 
crc kubenswrapper[4730]: set -euo pipefail Mar 20 15:40:12 crc kubenswrapper[4730]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 15:40:12 crc kubenswrapper[4730]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 15:40:12 crc kubenswrapper[4730]: # As the secret mount is optional we must wait for the files to be present. Mar 20 15:40:12 crc kubenswrapper[4730]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 20 15:40:12 crc kubenswrapper[4730]: TS=$(date +%s) Mar 20 15:40:12 crc kubenswrapper[4730]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 15:40:12 crc kubenswrapper[4730]: HAS_LOGGED_INFO=0 Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: log_missing_certs(){ Mar 20 15:40:12 crc kubenswrapper[4730]: CUR_TS=$(date +%s) Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 20 15:40:12 crc kubenswrapper[4730]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 15:40:12 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 15:40:12 crc kubenswrapper[4730]: HAS_LOGGED_INFO=1 Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: } Mar 20 15:40:12 crc kubenswrapper[4730]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 20 15:40:12 crc kubenswrapper[4730]: log_missing_certs Mar 20 15:40:12 crc kubenswrapper[4730]: sleep 5 Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 15:40:12 crc kubenswrapper[4730]: exec /usr/bin/kube-rbac-proxy \ Mar 20 15:40:12 crc kubenswrapper[4730]: --logtostderr \ Mar 20 15:40:12 crc kubenswrapper[4730]: --secure-listen-address=:9108 \ Mar 20 15:40:12 crc kubenswrapper[4730]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 15:40:12 crc kubenswrapper[4730]: --upstream=http://127.0.0.1:29108/ \ Mar 20 15:40:12 crc kubenswrapper[4730]: --tls-private-key-file=${TLS_PK} \ Mar 20 15:40:12 crc kubenswrapper[4730]: --tls-cert-file=${TLS_CERT} Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.849826 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:12 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 
15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v6_join_subnet_opt= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "false" == "true" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: persistent_ips_enabled_flag= Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: # This is needed so that converting clusters from GA to TP Mar 20 15:40:12 crc kubenswrapper[4730]: # will rollout control plane pods as well Mar 20 15:40:12 crc kubenswrapper[4730]: network_segmentation_enabled_flag= Mar 20 15:40:12 crc kubenswrapper[4730]: multi_network_enabled_flag= Mar 20 15:40:12 crc 
kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: multi_network_enabled_flag="--enable-multi-network" Mar 20 15:40:12 crc kubenswrapper[4730]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 15:40:12 crc kubenswrapper[4730]: exec /usr/bin/ovnkube \ Mar 20 15:40:12 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:12 crc kubenswrapper[4730]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 15:40:12 crc kubenswrapper[4730]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --metrics-enable-pprof \ Mar 20 15:40:12 crc kubenswrapper[4730]: --metrics-enable-config-duration \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${ovn_v4_join_subnet_opt} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${ovn_v6_join_subnet_opt} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${dns_name_resolver_enabled_flag} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${persistent_ips_enabled_flag} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${multi_network_enabled_flag} \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${network_segmentation_enabled_flag} Mar 20 15:40:12 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.850350 4730 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0d4eaf48ca38e2c780071b4c2cd083f62f9df885d079ed59049f71027240efb4 WatchSource:0}: Error finding container 0d4eaf48ca38e2c780071b4c2cd083f62f9df885d079ed59049f71027240efb4: Status 404 returned error can't find the container with id 0d4eaf48ca38e2c780071b4c2cd083f62f9df885d079ed59049f71027240efb4 Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.850901 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" podUID="a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.854737 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:12 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: source /etc/kubernetes/apiserver-url.env Mar 20 15:40:12 crc kubenswrapper[4730]: else Mar 20 15:40:12 crc kubenswrapper[4730]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 15:40:12 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 15:40:12 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.856008 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.856126 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.863209 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.865832 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.875480 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.876269 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-aaa236df0208971cf48de3f9bf06671b82fce6170fb3ad4236c9cb521096380b WatchSource:0}: Error finding container aaa236df0208971cf48de3f9bf06671b82fce6170fb3ad4236c9cb521096380b: Status 404 returned error can't find the container with id aaa236df0208971cf48de3f9bf06671b82fce6170fb3ad4236c9cb521096380b Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.876308 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.879094 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.881630 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:12 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 15:40:12 crc kubenswrapper[4730]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 15:40:12 crc kubenswrapper[4730]: ho_enable="--enable-hybrid-overlay" Mar 20 15:40:12 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 15:40:12 crc kubenswrapper[4730]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 15:40:12 crc kubenswrapper[4730]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 15:40:12 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:12 crc kubenswrapper[4730]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --webhook-host=127.0.0.1 \ Mar 20 15:40:12 crc kubenswrapper[4730]: --webhook-port=9743 \ Mar 20 15:40:12 crc kubenswrapper[4730]: ${ho_enable} \ Mar 20 15:40:12 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:12 crc kubenswrapper[4730]: --disable-approver \ Mar 20 15:40:12 crc kubenswrapper[4730]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --wait-for-kubernetes-api=200s \ Mar 20 15:40:12 crc kubenswrapper[4730]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.883488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.884216 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:12 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 15:40:12 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:12 crc kubenswrapper[4730]: --disable-webhook \ Mar 20 15:40:12 crc kubenswrapper[4730]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 15:40:12 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.885444 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.886393 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n4w74" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.891082 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.891402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.891437 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xml\" (UniqueName: \"kubernetes.io/projected/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-kube-api-access-m7xml\") pod 
\"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.891569 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.891633 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:13.391615862 +0000 UTC m=+72.604987241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.896099 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-52ab0a486f857a8f6da3f2dfd99d1bad7f101147f963e683628056e2b95a1a8b WatchSource:0}: Error finding container 52ab0a486f857a8f6da3f2dfd99d1bad7f101147f963e683628056e2b95a1a8b: Status 404 returned error can't find the container with id 52ab0a486f857a8f6da3f2dfd99d1bad7f101147f963e683628056e2b95a1a8b Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.898067 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.899283 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.901559 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.904285 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee8d55f_90bd_4484_8455_933de455efea.slice/crio-c6cedbaeead04f0a723ee0c341b7f6751c3ad80c472b32b9905de4d6b0d54e0b WatchSource:0}: Error finding container c6cedbaeead04f0a723ee0c341b7f6751c3ad80c472b32b9905de4d6b0d54e0b: Status 404 returned error can't find the container with id c6cedbaeead04f0a723ee0c341b7f6751c3ad80c472b32b9905de4d6b0d54e0b Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.904459 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.906469 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 15:40:12 crc kubenswrapper[4730]: while [ 
true ]; Mar 20 15:40:12 crc kubenswrapper[4730]: do Mar 20 15:40:12 crc kubenswrapper[4730]: for f in $(ls /tmp/serviceca); do Mar 20 15:40:12 crc kubenswrapper[4730]: echo $f Mar 20 15:40:12 crc kubenswrapper[4730]: ca_file_path="/tmp/serviceca/${f}" Mar 20 15:40:12 crc kubenswrapper[4730]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 15:40:12 crc kubenswrapper[4730]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 15:40:12 crc kubenswrapper[4730]: if [ -e "${reg_dir_path}" ]; then Mar 20 15:40:12 crc kubenswrapper[4730]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:12 crc kubenswrapper[4730]: else Mar 20 15:40:12 crc kubenswrapper[4730]: mkdir $reg_dir_path Mar 20 15:40:12 crc kubenswrapper[4730]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: for d in $(ls /etc/docker/certs.d); do Mar 20 15:40:12 crc kubenswrapper[4730]: echo $d Mar 20 15:40:12 crc kubenswrapper[4730]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 15:40:12 crc kubenswrapper[4730]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 15:40:12 crc kubenswrapper[4730]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 20 15:40:12 crc kubenswrapper[4730]: rm -rf /etc/docker/certs.d/$d Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: sleep 60 & wait ${!} Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-n4w74_openshift-image-registry(2ee8d55f-90bd-4484-8455-933de455efea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.906590 4730 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.908132 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-n4w74" podUID="2ee8d55f-90bd-4484-8455-933de455efea" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.908217 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.908715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xml\" (UniqueName: \"kubernetes.io/projected/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-kube-api-access-m7xml\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.923201 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r2kn" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.929489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.929527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.929536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.929565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.929575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:12Z","lastTransitionTime":"2026-03-20T15:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.931928 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f97b1f1_1fad_44ec_8253_17dd6a5eee54.slice/crio-fc4a02f9622b344573aecaba86f050f3013a2ce63ed59201d29d31b7fdfc4c52 WatchSource:0}: Error finding container fc4a02f9622b344573aecaba86f050f3013a2ce63ed59201d29d31b7fdfc4c52: Status 404 returned error can't find the container with id fc4a02f9622b344573aecaba86f050f3013a2ce63ed59201d29d31b7fdfc4c52 Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.933638 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 15:40:12 crc kubenswrapper[4730]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 15:40:12 crc kubenswrapper[4730]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvthz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.934841 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.942393 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.952924 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 15:40:12 crc kubenswrapper[4730]: apiVersion: v1 Mar 20 15:40:12 crc kubenswrapper[4730]: clusters: Mar 20 15:40:12 crc kubenswrapper[4730]: - cluster: Mar 20 15:40:12 crc kubenswrapper[4730]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 15:40:12 crc kubenswrapper[4730]: server: https://api-int.crc.testing:6443 Mar 20 15:40:12 crc kubenswrapper[4730]: name: default-cluster Mar 20 15:40:12 crc kubenswrapper[4730]: contexts: Mar 20 15:40:12 crc kubenswrapper[4730]: - context: Mar 20 15:40:12 crc kubenswrapper[4730]: cluster: default-cluster Mar 20 15:40:12 crc 
kubenswrapper[4730]: namespace: default Mar 20 15:40:12 crc kubenswrapper[4730]: user: default-auth Mar 20 15:40:12 crc kubenswrapper[4730]: name: default-context Mar 20 15:40:12 crc kubenswrapper[4730]: current-context: default-context Mar 20 15:40:12 crc kubenswrapper[4730]: kind: Config Mar 20 15:40:12 crc kubenswrapper[4730]: preferences: {} Mar 20 15:40:12 crc kubenswrapper[4730]: users: Mar 20 15:40:12 crc kubenswrapper[4730]: - name: default-auth Mar 20 15:40:12 crc kubenswrapper[4730]: user: Mar 20 15:40:12 crc kubenswrapper[4730]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:12 crc kubenswrapper[4730]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:12 crc kubenswrapper[4730]: EOF Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz64b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 
15:40:12.954092 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.954101 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-69fnw" Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.963545 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102cb977_7291_453e_9282_20572071afee.slice/crio-e6f0997dd31e8e344f7ef497d6d54e32ae4978ad19519923f1968f9227693b16 WatchSource:0}: Error finding container e6f0997dd31e8e344f7ef497d6d54e32ae4978ad19519923f1968f9227693b16: Status 404 returned error can't find the container with id e6f0997dd31e8e344f7ef497d6d54e32ae4978ad19519923f1968f9227693b16 Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.965232 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:12 crc kubenswrapper[4730]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:12 crc kubenswrapper[4730]: set -uo pipefail Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 15:40:12 crc kubenswrapper[4730]: HOSTS_FILE="/etc/hosts" Mar 20 15:40:12 crc kubenswrapper[4730]: TEMP_FILE="/etc/hosts.tmp" Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: IFS=', ' read -r -a services <<< 
"${SERVICES}" Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: # Make a temporary file with the old hosts file's attributes. Mar 20 15:40:12 crc kubenswrapper[4730]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 15:40:12 crc kubenswrapper[4730]: echo "Failed to preserve hosts file. Exiting." Mar 20 15:40:12 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: while true; do Mar 20 15:40:12 crc kubenswrapper[4730]: declare -A svc_ips Mar 20 15:40:12 crc kubenswrapper[4730]: for svc in "${services[@]}"; do Mar 20 15:40:12 crc kubenswrapper[4730]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 15:40:12 crc kubenswrapper[4730]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 15:40:12 crc kubenswrapper[4730]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 15:40:12 crc kubenswrapper[4730]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 15:40:12 crc kubenswrapper[4730]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:12 crc kubenswrapper[4730]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:12 crc kubenswrapper[4730]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:12 crc kubenswrapper[4730]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 15:40:12 crc kubenswrapper[4730]: for i in ${!cmds[*]} Mar 20 15:40:12 crc kubenswrapper[4730]: do Mar 20 15:40:12 crc kubenswrapper[4730]: ips=($(eval "${cmds[i]}")) Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: svc_ips["${svc}"]="${ips[@]}" Mar 20 15:40:12 crc kubenswrapper[4730]: break Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: # Update /etc/hosts only if we get valid service IPs Mar 20 15:40:12 crc kubenswrapper[4730]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 15:40:12 crc kubenswrapper[4730]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 15:40:12 crc kubenswrapper[4730]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 15:40:12 crc kubenswrapper[4730]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 15:40:12 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:12 crc kubenswrapper[4730]: continue Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: # Append resolver entries for services Mar 20 15:40:12 crc kubenswrapper[4730]: rc=0 Mar 20 15:40:12 crc kubenswrapper[4730]: for svc in "${!svc_ips[@]}"; do Mar 20 15:40:12 crc kubenswrapper[4730]: for ip in ${svc_ips[${svc}]}; do Mar 20 15:40:12 crc kubenswrapper[4730]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: if [[ $rc -ne 0 ]]; then Mar 20 15:40:12 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:12 crc kubenswrapper[4730]: continue Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: Mar 20 15:40:12 crc kubenswrapper[4730]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 15:40:12 crc kubenswrapper[4730]: # Replace /etc/hosts with our modified version if needed Mar 20 15:40:12 crc kubenswrapper[4730]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 15:40:12 crc kubenswrapper[4730]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 15:40:12 crc kubenswrapper[4730]: fi Mar 20 15:40:12 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:12 crc kubenswrapper[4730]: unset svc_ips Mar 20 15:40:12 crc kubenswrapper[4730]: done Mar 20 15:40:12 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-69fnw_openshift-dns(102cb977-7291-453e-9282-20572071afee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:12 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.966518 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-69fnw" podUID="102cb977-7291-453e-9282-20572071afee" Mar 20 15:40:12 crc kubenswrapper[4730]: I0320 15:40:12.979964 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-49hht" Mar 20 15:40:12 crc kubenswrapper[4730]: W0320 15:40:12.989978 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbb015c0_3a11_48bf_a59f_22bc03ca2fb9.slice/crio-f1d093488113ecc8c7f47886f76617cbd92ef51fd6c2516efe3977f71ad7a69a WatchSource:0}: Error finding container f1d093488113ecc8c7f47886f76617cbd92ef51fd6c2516efe3977f71ad7a69a: Status 404 returned error can't find the container with id f1d093488113ecc8c7f47886f76617cbd92ef51fd6c2516efe3977f71ad7a69a Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.992085 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qtg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-49hht_openshift-multus(dbb015c0-3a11-48bf-a59f-22bc03ca2fb9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:12 crc kubenswrapper[4730]: E0320 15:40:12.993225 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-49hht" podUID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.031988 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.032023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.032035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.032049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.032058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.134615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.134662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.134674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.134692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.134707 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.194356 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.194538 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.194501502 +0000 UTC m=+73.407872911 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.194613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.194696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.194763 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.194849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.194861 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195049 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195093 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195107 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195140 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.195112178 +0000 UTC m=+73.408483587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195002 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195228 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195304 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.194930 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195182 
4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.19516327 +0000 UTC m=+73.408534679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195431 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.195403516 +0000 UTC m=+73.408774935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.195477 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.195460288 +0000 UTC m=+73.408831707 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.237190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.237282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.237295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.237313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.237325 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.339882 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.339941 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.339953 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.339971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.339985 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.397048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.397177 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.397268 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:14.39723355 +0000 UTC m=+73.610604929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.441881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.441919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.441931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.441951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.441963 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.536573 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.537084 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.538399 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.539021 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.539981 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.540500 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.541030 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.541893 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.542523 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543530 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543917 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.543980 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.544040 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.545157 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.545742 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.546314 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.547620 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.548127 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.549177 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.549636 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.550263 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.551464 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.551897 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.552910 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.553340 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.554330 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.554730 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.555350 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.556572 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.557121 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.558202 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.558821 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.559885 4730 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.560010 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.561884 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.562934 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.563501 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.564927 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.565833 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.566711 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.567369 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.568730 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.569278 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.570285 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.570928 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.571889 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.572396 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.573299 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.573794 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.574952 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.575434 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.576511 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.576947 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.577860 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.578460 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.578906 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.646839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.646899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.646913 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.646930 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.646946 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.748731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.748763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.748774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.748793 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.748805 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.832847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"f0bb8a04718d250ff389e424bacc9dc0320526af93827c03eb732b797d1a25fb"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.833909 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerStarted","Data":"fc4a02f9622b344573aecaba86f050f3013a2ce63ed59201d29d31b7fdfc4c52"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.834771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aaa236df0208971cf48de3f9bf06671b82fce6170fb3ad4236c9cb521096380b"} Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.835021 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 15:40:13 crc kubenswrapper[4730]: apiVersion: v1 Mar 20 15:40:13 crc kubenswrapper[4730]: clusters: Mar 20 15:40:13 crc kubenswrapper[4730]: - cluster: Mar 20 15:40:13 crc kubenswrapper[4730]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 15:40:13 crc kubenswrapper[4730]: server: https://api-int.crc.testing:6443 Mar 20 15:40:13 crc kubenswrapper[4730]: name: default-cluster Mar 20 15:40:13 crc kubenswrapper[4730]: contexts: Mar 20 15:40:13 crc kubenswrapper[4730]: - context: Mar 20 15:40:13 crc kubenswrapper[4730]: cluster: default-cluster Mar 20 15:40:13 crc 
kubenswrapper[4730]: namespace: default Mar 20 15:40:13 crc kubenswrapper[4730]: user: default-auth Mar 20 15:40:13 crc kubenswrapper[4730]: name: default-context Mar 20 15:40:13 crc kubenswrapper[4730]: current-context: default-context Mar 20 15:40:13 crc kubenswrapper[4730]: kind: Config Mar 20 15:40:13 crc kubenswrapper[4730]: preferences: {} Mar 20 15:40:13 crc kubenswrapper[4730]: users: Mar 20 15:40:13 crc kubenswrapper[4730]: - name: default-auth Mar 20 15:40:13 crc kubenswrapper[4730]: user: Mar 20 15:40:13 crc kubenswrapper[4730]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:13 crc kubenswrapper[4730]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:13 crc kubenswrapper[4730]: EOF Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz64b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 
15:40:13.835903 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 15:40:13 crc kubenswrapper[4730]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 15:40:13 crc kubenswrapper[4730]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvthz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.835999 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: 
source "/env/_master" Mar 20 15:40:13 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 15:40:13 crc kubenswrapper[4730]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 15:40:13 crc kubenswrapper[4730]: ho_enable="--enable-hybrid-overlay" Mar 20 15:40:13 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 15:40:13 crc kubenswrapper[4730]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 15:40:13 crc kubenswrapper[4730]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 15:40:13 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:13 crc kubenswrapper[4730]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --webhook-host=127.0.0.1 \ Mar 20 15:40:13 crc kubenswrapper[4730]: --webhook-port=9743 \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${ho_enable} \ Mar 20 15:40:13 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:13 crc kubenswrapper[4730]: --disable-approver \ Mar 20 15:40:13 crc kubenswrapper[4730]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --wait-for-kubernetes-api=200s \ Mar 20 15:40:13 crc kubenswrapper[4730]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:13 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc 
kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.836098 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.836356 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-69fnw" event={"ID":"102cb977-7291-453e-9282-20572071afee","Type":"ContainerStarted","Data":"e6f0997dd31e8e344f7ef497d6d54e32ae4978ad19519923f1968f9227693b16"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.837133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0d4eaf48ca38e2c780071b4c2cd083f62f9df885d079ed59049f71027240efb4"} Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.837183 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.838157 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:13 crc kubenswrapper[4730]: set -uo pipefail Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM 
Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 15:40:13 crc kubenswrapper[4730]: HOSTS_FILE="/etc/hosts" Mar 20 15:40:13 crc kubenswrapper[4730]: TEMP_FILE="/etc/hosts.tmp" Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: # Make a temporary file with the old hosts file's attributes. Mar 20 15:40:13 crc kubenswrapper[4730]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 15:40:13 crc kubenswrapper[4730]: echo "Failed to preserve hosts file. Exiting." Mar 20 15:40:13 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: while true; do Mar 20 15:40:13 crc kubenswrapper[4730]: declare -A svc_ips Mar 20 15:40:13 crc kubenswrapper[4730]: for svc in "${services[@]}"; do Mar 20 15:40:13 crc kubenswrapper[4730]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 15:40:13 crc kubenswrapper[4730]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 15:40:13 crc kubenswrapper[4730]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 15:40:13 crc kubenswrapper[4730]: # support UDP loadbalancers and require reaching DNS through TCP. 
Mar 20 15:40:13 crc kubenswrapper[4730]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:13 crc kubenswrapper[4730]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:13 crc kubenswrapper[4730]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:13 crc kubenswrapper[4730]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 15:40:13 crc kubenswrapper[4730]: for i in ${!cmds[*]} Mar 20 15:40:13 crc kubenswrapper[4730]: do Mar 20 15:40:13 crc kubenswrapper[4730]: ips=($(eval "${cmds[i]}")) Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: svc_ips["${svc}"]="${ips[@]}" Mar 20 15:40:13 crc kubenswrapper[4730]: break Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: # Update /etc/hosts only if we get valid service IPs Mar 20 15:40:13 crc kubenswrapper[4730]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 15:40:13 crc kubenswrapper[4730]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 15:40:13 crc kubenswrapper[4730]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 15:40:13 crc kubenswrapper[4730]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 15:40:13 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:13 crc kubenswrapper[4730]: continue Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: # Append resolver entries for services Mar 20 15:40:13 crc kubenswrapper[4730]: rc=0 Mar 20 15:40:13 crc kubenswrapper[4730]: for svc in "${!svc_ips[@]}"; do Mar 20 15:40:13 crc kubenswrapper[4730]: for ip in ${svc_ips[${svc}]}; do Mar 20 15:40:13 crc kubenswrapper[4730]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ $rc -ne 0 ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:13 crc kubenswrapper[4730]: continue Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 15:40:13 crc kubenswrapper[4730]: # Replace /etc/hosts with our modified version if needed Mar 20 15:40:13 crc kubenswrapper[4730]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 15:40:13 crc kubenswrapper[4730]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:13 crc kubenswrapper[4730]: unset svc_ips Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-69fnw_openshift-dns(102cb977-7291-453e-9282-20572071afee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.838163 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerStarted","Data":"f1d093488113ecc8c7f47886f76617cbd92ef51fd6c2516efe3977f71ad7a69a"} Mar 20 15:40:13 crc 
kubenswrapper[4730]: E0320 15:40:13.838381 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:13 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: source /etc/kubernetes/apiserver-url.env Mar 20 15:40:13 crc kubenswrapper[4730]: else Mar 20 15:40:13 crc kubenswrapper[4730]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 15:40:13 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163
a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.838655 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:13 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 
15:40:13 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:13 crc kubenswrapper[4730]: --disable-webhook \ Mar 20 15:40:13 crc kubenswrapper[4730]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.841261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" event={"ID":"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0","Type":"ContainerStarted","Data":"c6e84e70aeec61f3c5e26afb92d8d59eb7be5dbcce5dc7207bc638470927d8d6"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.842441 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n4w74" event={"ID":"2ee8d55f-90bd-4484-8455-933de455efea","Type":"ContainerStarted","Data":"c6cedbaeead04f0a723ee0c341b7f6751c3ad80c472b32b9905de4d6b0d54e0b"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.842556 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"a0784fbf7fdba6b3f1633c4eeb3bee20b81376e6456ebe1c5ae165fcca0c2e9e"} Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.842471 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.842489 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-dns/node-resolver-69fnw" podUID="102cb977-7291-453e-9282-20572071afee" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.842448 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.842583 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qtg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNo
tPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-49hht_openshift-multus(dbb015c0-3a11-48bf-a59f-22bc03ca2fb9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.842646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"52ab0a486f857a8f6da3f2dfd99d1bad7f101147f963e683628056e2b95a1a8b"} Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.843697 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 15:40:13 crc kubenswrapper[4730]: while [ true ]; Mar 20 15:40:13 crc kubenswrapper[4730]: do Mar 20 15:40:13 crc kubenswrapper[4730]: for f in $(ls /tmp/serviceca); do Mar 20 15:40:13 crc kubenswrapper[4730]: echo $f Mar 20 15:40:13 crc kubenswrapper[4730]: ca_file_path="/tmp/serviceca/${f}" Mar 20 15:40:13 crc kubenswrapper[4730]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 15:40:13 crc kubenswrapper[4730]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 15:40:13 crc kubenswrapper[4730]: if [ -e "${reg_dir_path}" ]; then Mar 20 15:40:13 crc kubenswrapper[4730]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:13 crc kubenswrapper[4730]: else Mar 20 15:40:13 crc kubenswrapper[4730]: mkdir $reg_dir_path Mar 20 15:40:13 crc kubenswrapper[4730]: cp 
$ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: for d in $(ls /etc/docker/certs.d); do Mar 20 15:40:13 crc kubenswrapper[4730]: echo $d Mar 20 15:40:13 crc kubenswrapper[4730]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 15:40:13 crc kubenswrapper[4730]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 15:40:13 crc kubenswrapper[4730]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 15:40:13 crc kubenswrapper[4730]: rm -rf /etc/docker/certs.d/$d Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: sleep 60 & wait ${!} Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackTo
LogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-n4w74_openshift-image-registry(2ee8d55f-90bd-4484-8455-933de455efea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.843735 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-49hht" podUID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.843975 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:13 crc kubenswrapper[4730]: set -euo pipefail Mar 20 15:40:13 crc kubenswrapper[4730]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 15:40:13 crc kubenswrapper[4730]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 15:40:13 crc kubenswrapper[4730]: # As the secret mount is optional we must wait for the files to be present. Mar 20 15:40:13 crc kubenswrapper[4730]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 20 15:40:13 crc kubenswrapper[4730]: TS=$(date +%s) Mar 20 15:40:13 crc kubenswrapper[4730]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 15:40:13 crc kubenswrapper[4730]: HAS_LOGGED_INFO=0 Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: log_missing_certs(){ Mar 20 15:40:13 crc kubenswrapper[4730]: CUR_TS=$(date +%s) Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 20 15:40:13 crc kubenswrapper[4730]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 15:40:13 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 15:40:13 crc kubenswrapper[4730]: HAS_LOGGED_INFO=1 Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: } Mar 20 15:40:13 crc kubenswrapper[4730]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 20 15:40:13 crc kubenswrapper[4730]: log_missing_certs Mar 20 15:40:13 crc kubenswrapper[4730]: sleep 5 Mar 20 15:40:13 crc kubenswrapper[4730]: done Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 15:40:13 crc kubenswrapper[4730]: exec /usr/bin/kube-rbac-proxy \ Mar 20 15:40:13 crc kubenswrapper[4730]: --logtostderr \ Mar 20 15:40:13 crc kubenswrapper[4730]: --secure-listen-address=:9108 \ Mar 20 15:40:13 crc kubenswrapper[4730]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 15:40:13 crc kubenswrapper[4730]: --upstream=http://127.0.0.1:29108/ \ Mar 20 15:40:13 crc kubenswrapper[4730]: --tls-private-key-file=${TLS_PK} \ Mar 20 15:40:13 crc kubenswrapper[4730]: --tls-cert-file=${TLS_CERT} Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.844439 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.845150 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-n4w74" podUID="2ee8d55f-90bd-4484-8455-933de455efea" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.845350 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.845971 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.846019 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.846324 4730 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:13 crc kubenswrapper[4730]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:13 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v6_join_subnet_opt= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "false" == "true" 
]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: persistent_ips_enabled_flag= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: # This is needed so that converting clusters from GA to TP Mar 20 15:40:13 crc kubenswrapper[4730]: # will rollout control plane pods as well Mar 20 15:40:13 crc kubenswrapper[4730]: network_segmentation_enabled_flag= Mar 20 15:40:13 crc kubenswrapper[4730]: multi_network_enabled_flag= Mar 20 15:40:13 crc kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:13 crc kubenswrapper[4730]: multi_network_enabled_flag="--enable-multi-network" Mar 20 15:40:13 crc kubenswrapper[4730]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 15:40:13 crc kubenswrapper[4730]: fi Mar 20 15:40:13 crc kubenswrapper[4730]: Mar 20 15:40:13 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 15:40:13 crc kubenswrapper[4730]: exec /usr/bin/ovnkube \ Mar 20 15:40:13 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:13 crc kubenswrapper[4730]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 15:40:13 crc kubenswrapper[4730]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 15:40:13 crc kubenswrapper[4730]: --metrics-enable-pprof \ Mar 20 15:40:13 crc kubenswrapper[4730]: --metrics-enable-config-duration \ Mar 20 15:40:13 crc 
kubenswrapper[4730]: ${ovn_v4_join_subnet_opt} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${ovn_v6_join_subnet_opt} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${dns_name_resolver_enabled_flag} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${persistent_ips_enabled_flag} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${multi_network_enabled_flag} \ Mar 20 15:40:13 crc kubenswrapper[4730]: ${network_segmentation_enabled_flag} Mar 20 15:40:13 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:13 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.848357 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" podUID="a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0" Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.849013 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.849965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.849997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.850009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.850025 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.850038 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: E0320 15:40:13.850339 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.856931 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.867313 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.874868 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.884313 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.899523 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.913912 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.923469 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.929922 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.938948 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.947803 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.952592 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.952637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.952649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.952671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.952684 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:13Z","lastTransitionTime":"2026-03-20T15:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.955732 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.964348 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.976576 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:13 crc kubenswrapper[4730]: I0320 15:40:13.997083 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.009709 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.020057 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.029359 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.035787 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.043409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.051834 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.055656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.055705 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.055722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.055745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.055799 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.062346 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.075806 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.084357 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.093136 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.105153 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.113016 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.121855 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.158725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.158770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.158784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.158803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.158818 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.207720 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.207827 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.207847 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.207829264 +0000 UTC m=+75.421200633 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.207871 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.207896 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.207903 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.207920 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") "
pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.207941 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.207930947 +0000 UTC m=+75.421302316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208031 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208048 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208060 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208076 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208094 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208129 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208141 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208098 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.208088321 +0000 UTC m=+75.421459690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208176 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.208157413 +0000 UTC m=+75.421528872 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.208191 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.208183284 +0000 UTC m=+75.421554773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.260659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.260874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.260958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.261057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.261140 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.364510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.364547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.364557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.364572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.364582 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.409768 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.409982 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.410065 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:16.410042628 +0000 UTC m=+75.623414087 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.466561 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.466610 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.466623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.466642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.466651 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.533445 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.533479 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.533795 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.533538 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.533508 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.534138 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.534044 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.533950 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.546901 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.547108 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.547392 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.569657 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.569709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.569723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.569741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.569754 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.671621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.672178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.672278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.672375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.672474 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.774227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.774277 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.774290 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.774303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.774312 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.844617 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5"
Mar 20 15:40:14 crc kubenswrapper[4730]: E0320 15:40:14.844771 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.876812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.876849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.876859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.876871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.876881 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.979709 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.980041 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.980172 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.980313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:14 crc kubenswrapper[4730]: I0320 15:40:14.980418 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:14Z","lastTransitionTime":"2026-03-20T15:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.083152 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.083198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.083209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.083225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.083236 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.185529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.185621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.185655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.185677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.185688 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.287954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.288011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.288024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.288050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.288062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.390262 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.390483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.390586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.390675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.390764 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.493853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.493906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.493919 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.493939 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.493959 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.596182 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.596231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.596263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.596281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.596292 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.698805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.699423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.699545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.699645 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.699823 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.803537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.803868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.803996 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.804126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.804242 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.907497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.908105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.908395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.908619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:15 crc kubenswrapper[4730]: I0320 15:40:15.908814 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:15Z","lastTransitionTime":"2026-03-20T15:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.011911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.011950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.011959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.011974 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.011983 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.115547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.115603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.115620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.115642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.115658 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.219312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.219439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.219460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.219545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.219575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.231677 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.231886 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 15:40:20.231856215 +0000 UTC m=+79.445227614 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.231935 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.232008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.232059 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.232114 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232325 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232349 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232367 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232418 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:20.23240374 +0000 UTC m=+79.445775139 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232537 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232612 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:20.232590825 +0000 UTC m=+79.445962224 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232620 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232641 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232682 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232701 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232737 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:20.232700838 +0000 UTC m=+79.446072247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.232777 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:20.23275198 +0000 UTC m=+79.446123379 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.322440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.322526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.322544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.322599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.322615 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.425418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.425733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.425875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.426034 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.426137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.434151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.434373 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.434674 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:20.434641685 +0000 UTC m=+79.648013094 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.528983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.529219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.529296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.529390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.529464 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.532529 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.532581 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.532667 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.532730 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.532953 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.532944 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.533064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:16 crc kubenswrapper[4730]: E0320 15:40:16.534008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.633164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.633227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.633279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.633309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.633330 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.736665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.736724 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.736744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.736767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.736786 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.839490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.839569 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.839584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.839606 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.839620 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.942653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.942712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.942746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.942776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:16 crc kubenswrapper[4730]: I0320 15:40:16.942798 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:16Z","lastTransitionTime":"2026-03-20T15:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.046294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.046365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.046381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.046406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.046424 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.149701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.149777 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.149802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.149835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.149858 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.252961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.253008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.253044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.253068 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.253080 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.356013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.356093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.356116 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.356140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.356156 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.459718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.459768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.459785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.459809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.459830 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.562707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.562765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.562783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.562808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.562830 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.666316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.666381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.666405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.666437 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.666461 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.768658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.768685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.768694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.768705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.768715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.872346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.872420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.872440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.872467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.872486 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.975565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.975618 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.975649 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.975676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:17 crc kubenswrapper[4730]: I0320 15:40:17.975694 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:17Z","lastTransitionTime":"2026-03-20T15:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.079533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.079598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.079621 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.079652 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.079722 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.182480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.182511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.182519 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.182533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.182543 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.286078 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.286129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.286141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.286159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.286171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.388887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.388945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.388967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.388998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.389023 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.491969 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.492032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.492050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.492075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.492092 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.532963 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:18 crc kubenswrapper[4730]: E0320 15:40:18.533201 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.533483 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.533553 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:18 crc kubenswrapper[4730]: E0320 15:40:18.533755 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.533801 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:18 crc kubenswrapper[4730]: E0320 15:40:18.533879 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:18 crc kubenswrapper[4730]: E0320 15:40:18.533963 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.595213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.595263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.595272 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.595288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.595297 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.698179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.698294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.698316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.698345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.698363 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.801910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.801980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.801999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.802024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.802041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.904404 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.904457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.904475 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.904497 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:18 crc kubenswrapper[4730]: I0320 15:40:18.904514 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:18Z","lastTransitionTime":"2026-03-20T15:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.007427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.007520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.007549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.007577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.007600 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.110155 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.110196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.110208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.110224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.110236 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.213670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.213750 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.213774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.213804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.213830 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.317743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.317798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.317813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.317832 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.318032 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.420817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.421130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.421282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.421444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.421566 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.523970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.524031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.524056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.524083 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.524108 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.627117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.627197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.627236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.627306 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.627332 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.730211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.730289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.730299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.730315 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.730326 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.833176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.833216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.833230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.833270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.833286 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.935718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.935784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.935804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.935833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:19 crc kubenswrapper[4730]: I0320 15:40:19.935898 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:19Z","lastTransitionTime":"2026-03-20T15:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.037875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.037937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.037955 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.037978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.037995 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.141834 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.142190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.142424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.142614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.142787 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.244854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.244903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.244922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.244944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.244962 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.275781 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276012 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 15:40:28.275973832 +0000 UTC m=+87.489345241 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.276282 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.276398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.276459 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.276504 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276510 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276559 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276581 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276655 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:28.27663012 +0000 UTC m=+87.490001529 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276673 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276698 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276720 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276781 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:28.276763184 +0000 UTC m=+87.490134593 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276841 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276882 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:28.276870197 +0000 UTC m=+87.490241606 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276954 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.276993 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:28.27698121 +0000 UTC m=+87.490352619 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.348279 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.348329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.348344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.348366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.348382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.451623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.451698 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.451720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.451743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.451762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.478675 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.478936 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.479049 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:28.479022129 +0000 UTC m=+87.692393538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.532150 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.532275 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.532430 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.532476 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.532578 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.532684 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.532933 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:20 crc kubenswrapper[4730]: E0320 15:40:20.533400 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.555505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.555944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.556410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.556702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.556929 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.660659 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.660960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.661039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.661140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.661217 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.764299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.764362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.764382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.764412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.764437 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.866768 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.866806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.866816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.866833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.866843 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.969819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.969858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.969870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.969886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:20 crc kubenswrapper[4730]: I0320 15:40:20.969898 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:20Z","lastTransitionTime":"2026-03-20T15:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.072857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.072926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.072945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.072970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.072988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.176112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.176199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.176224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.176292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.176311 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.279014 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.279076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.279093 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.279117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.279135 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.381211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.381269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.381283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.381298 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.381307 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.483525 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.483560 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.483576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.483594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.483606 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.541820 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.553408 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.565331 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.575675 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.585739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 
15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.585892 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.586021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.586142 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.586354 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.588604 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.598230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.605010 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.616997 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.628829 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.638968 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.653182 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.663991 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.675067 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689562 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.689808 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.713382 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.792601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.792628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.792638 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.792650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.792659 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.894427 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.894463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.894473 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.894489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.894501 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.996566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.996607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.996616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.996634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:21 crc kubenswrapper[4730]: I0320 15:40:21.996643 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:21Z","lastTransitionTime":"2026-03-20T15:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.099157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.099478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.099575 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.099664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.099751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.202843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.202872 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.202880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.202894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.202903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.305842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.305873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.305884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.305906 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.305922 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.408963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.408999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.409008 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.409022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.409033 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.511622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.511682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.511705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.511731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.511750 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.533045 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.533091 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.533199 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.533224 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.533433 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.533545 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.533719 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.535971 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.539879 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.615242 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.615609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.615787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.615945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.616116 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.720229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.720703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.721032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.721228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.721451 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.824396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.824470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.824489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.824514 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.824530 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.914129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.914986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.915147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.915293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.915395 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.925609 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.930310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.930370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.930390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.930416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.930438 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.945046 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.949221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.949281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.949294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.949311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.949323 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.958714 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.962684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.962712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.962725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.962747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.962761 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.974524 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.978555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.978598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.978609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.978626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.978641 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.989500 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:22 crc kubenswrapper[4730]: E0320 15:40:22.989642 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.991400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.991461 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.991472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.991489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:22 crc kubenswrapper[4730]: I0320 15:40:22.991504 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:22Z","lastTransitionTime":"2026-03-20T15:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.094412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.094496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.094523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.094551 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.094571 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.197823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.197884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.197910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.197938 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.197961 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.300576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.300609 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.300622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.300639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.300650 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.403342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.403419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.403478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.403505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.403521 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.505640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.505677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.505689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.505705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.505716 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.607699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.608009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.608145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.608330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.608480 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.711294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.711330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.711342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.711357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.711369 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.813653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.813693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.813705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.813722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.813732 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.916549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.917043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.917119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.917309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:23 crc kubenswrapper[4730]: I0320 15:40:23.917401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:23Z","lastTransitionTime":"2026-03-20T15:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.020389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.020447 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.020472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.020505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.020518 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.123295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.123624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.123720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.123795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.123867 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.226653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.226719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.226738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.226762 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.226779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.329150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.329211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.329233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.329291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.329313 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.432751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.432820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.432859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.432893 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.432913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.532138 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:24 crc kubenswrapper[4730]: E0320 15:40:24.532566 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.532208 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:24 crc kubenswrapper[4730]: E0320 15:40:24.532879 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.532150 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:24 crc kubenswrapper[4730]: E0320 15:40:24.533238 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.532291 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:24 crc kubenswrapper[4730]: E0320 15:40:24.533603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.535894 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.536056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.536174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.536310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.536408 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.639840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.640177 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.640449 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.640692 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.640918 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.743860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.744168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.744421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.744669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.744885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.847666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.847707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.847718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.847733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.847744 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.950485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.950532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.950544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.950564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:24 crc kubenswrapper[4730]: I0320 15:40:24.950575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:24Z","lastTransitionTime":"2026-03-20T15:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.053586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.053628 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.053639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.053654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.053665 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.156313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.156355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.156364 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.156380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.156391 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.258460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.258498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.258511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.258526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.258539 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.361026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.361087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.361108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.361132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.361145 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.463395 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.463467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.463480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.463495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.463507 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.534604 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:25 crc kubenswrapper[4730]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:25 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 15:40:25 crc kubenswrapper[4730]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 15:40:25 crc kubenswrapper[4730]: ho_enable="--enable-hybrid-overlay" Mar 20 15:40:25 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 15:40:25 crc kubenswrapper[4730]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 15:40:25 crc kubenswrapper[4730]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 15:40:25 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:25 crc kubenswrapper[4730]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --webhook-host=127.0.0.1 \ Mar 20 15:40:25 crc kubenswrapper[4730]: --webhook-port=9743 \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${ho_enable} \ Mar 20 15:40:25 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:25 crc kubenswrapper[4730]: --disable-approver \ Mar 20 15:40:25 crc kubenswrapper[4730]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --wait-for-kubernetes-api=200s \ Mar 20 15:40:25 crc kubenswrapper[4730]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:25 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:25 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.534943 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qtg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCon
text:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-49hht_openshift-multus(dbb015c0-3a11-48bf-a59f-22bc03ca2fb9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.535656 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:25 crc kubenswrapper[4730]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:25 crc kubenswrapper[4730]: set -euo pipefail Mar 20 15:40:25 crc kubenswrapper[4730]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 15:40:25 crc kubenswrapper[4730]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 15:40:25 crc kubenswrapper[4730]: # As the secret mount is optional we must wait for the files to be present. Mar 20 15:40:25 crc kubenswrapper[4730]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 20 15:40:25 crc kubenswrapper[4730]: TS=$(date +%s) Mar 20 15:40:25 crc kubenswrapper[4730]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 15:40:25 crc kubenswrapper[4730]: HAS_LOGGED_INFO=0 Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: log_missing_certs(){ Mar 20 15:40:25 crc kubenswrapper[4730]: CUR_TS=$(date +%s) Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Mar 20 15:40:25 crc kubenswrapper[4730]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 15:40:25 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 15:40:25 crc kubenswrapper[4730]: HAS_LOGGED_INFO=1 Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: } Mar 20 15:40:25 crc kubenswrapper[4730]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 20 15:40:25 crc kubenswrapper[4730]: log_missing_certs Mar 20 15:40:25 crc kubenswrapper[4730]: sleep 5 Mar 20 15:40:25 crc kubenswrapper[4730]: done Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 15:40:25 crc kubenswrapper[4730]: exec /usr/bin/kube-rbac-proxy \ Mar 20 15:40:25 crc kubenswrapper[4730]: --logtostderr \ Mar 20 15:40:25 crc kubenswrapper[4730]: --secure-listen-address=:9108 \ Mar 20 15:40:25 crc kubenswrapper[4730]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 15:40:25 crc kubenswrapper[4730]: --upstream=http://127.0.0.1:29108/ \ Mar 20 15:40:25 crc kubenswrapper[4730]: --tls-private-key-file=${TLS_PK} \ Mar 20 15:40:25 crc kubenswrapper[4730]: --tls-cert-file=${TLS_CERT} Mar 20 15:40:25 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:25 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.536056 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-49hht" podUID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.536571 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:25 crc kubenswrapper[4730]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: source 
"/env/_master" Mar 20 15:40:25 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 15:40:25 crc kubenswrapper[4730]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 15:40:25 crc kubenswrapper[4730]: --disable-webhook \ Mar 20 15:40:25 crc kubenswrapper[4730]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --loglevel="${LOGLEVEL}" Mar 20 15:40:25 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:25 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.537750 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.539788 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:25 crc kubenswrapper[4730]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ -f "/env/_master" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: source "/env/_master" Mar 20 15:40:25 crc kubenswrapper[4730]: set +o allexport Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: 
ovn_v6_join_subnet_opt= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "" != "" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "false" == "true" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: persistent_ips_enabled_flag= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:25 crc kubenswrapper[4730]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: # This is needed so that converting clusters from GA to TP Mar 20 15:40:25 crc kubenswrapper[4730]: # will rollout control plane pods as well Mar 20 15:40:25 crc kubenswrapper[4730]: network_segmentation_enabled_flag= Mar 20 15:40:25 crc kubenswrapper[4730]: multi_network_enabled_flag= Mar 20 15:40:25 crc kubenswrapper[4730]: if [[ "true" == "true" ]]; then Mar 20 15:40:25 crc 
kubenswrapper[4730]: multi_network_enabled_flag="--enable-multi-network" Mar 20 15:40:25 crc kubenswrapper[4730]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 15:40:25 crc kubenswrapper[4730]: fi Mar 20 15:40:25 crc kubenswrapper[4730]: Mar 20 15:40:25 crc kubenswrapper[4730]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 15:40:25 crc kubenswrapper[4730]: exec /usr/bin/ovnkube \ Mar 20 15:40:25 crc kubenswrapper[4730]: --enable-interconnect \ Mar 20 15:40:25 crc kubenswrapper[4730]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 15:40:25 crc kubenswrapper[4730]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 15:40:25 crc kubenswrapper[4730]: --metrics-enable-pprof \ Mar 20 15:40:25 crc kubenswrapper[4730]: --metrics-enable-config-duration \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${ovn_v4_join_subnet_opt} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${ovn_v6_join_subnet_opt} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${dns_name_resolver_enabled_flag} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${persistent_ips_enabled_flag} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${multi_network_enabled_flag} \ Mar 20 15:40:25 crc kubenswrapper[4730]: ${network_segmentation_enabled_flag} Mar 20 15:40:25 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4xpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-p47zh_openshift-ovn-kubernetes(a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:25 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:25 crc kubenswrapper[4730]: E0320 15:40:25.541196 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" podUID="a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.566444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.566488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.566510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.566526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.566536 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.668613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.668851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.669012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.669240 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.669435 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.780056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.780088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.780099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.780113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.780482 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.882422 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.882472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.882487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.882503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.882513 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.984431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.984918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.985006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.985088 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:25 crc kubenswrapper[4730]: I0320 15:40:25.985166 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:25Z","lastTransitionTime":"2026-03-20T15:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.087778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.087804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.087816 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.087829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.087837 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.190822 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.191099 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.191217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.191323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.191436 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.294504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.294573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.294594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.294619 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.294646 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.397241 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.397314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.397324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.397338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.397347 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.507677 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.508136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.508293 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.508434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.508568 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.533441 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.533465 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.533628 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.533793 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.534007 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.534183 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.534286 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.534382 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.536584 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.537967 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:26 crc kubenswrapper[4730]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 15:40:26 crc kubenswrapper[4730]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 15:40:26 crc kubenswrapper[4730]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvthz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:26 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.538272 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 15:40:26 crc kubenswrapper[4730]: E0320 15:40:26.539452 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.612496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.612554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.612573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.612599 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.612617 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.716390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.716438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.716453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.716476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.716486 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.820338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.820440 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.820459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.820810 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.821140 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.923978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.924042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.924054 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.924074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:26 crc kubenswrapper[4730]: I0320 15:40:26.924087 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:26Z","lastTransitionTime":"2026-03-20T15:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.026559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.026626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.026650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.026680 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.026703 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.129695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.129729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.129738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.129752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.129762 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.233594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.233833 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.234080 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.234299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.234418 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.337086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.337159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.337184 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.337217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.337243 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.440233 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.440319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.440331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.440372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.440384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.534935 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:27 crc kubenswrapper[4730]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:27 crc kubenswrapper[4730]: set -o allexport Mar 20 15:40:27 crc kubenswrapper[4730]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 15:40:27 crc kubenswrapper[4730]: source /etc/kubernetes/apiserver-url.env Mar 20 15:40:27 crc kubenswrapper[4730]: else Mar 20 15:40:27 crc kubenswrapper[4730]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 15:40:27 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 15:40:27 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:27 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.535074 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:27 crc kubenswrapper[4730]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 15:40:27 crc kubenswrapper[4730]: set -uo pipefail Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc 
kubenswrapper[4730]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 15:40:27 crc kubenswrapper[4730]: HOSTS_FILE="/etc/hosts" Mar 20 15:40:27 crc kubenswrapper[4730]: TEMP_FILE="/etc/hosts.tmp" Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: # Make a temporary file with the old hosts file's attributes. Mar 20 15:40:27 crc kubenswrapper[4730]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 15:40:27 crc kubenswrapper[4730]: echo "Failed to preserve hosts file. Exiting." Mar 20 15:40:27 crc kubenswrapper[4730]: exit 1 Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: while true; do Mar 20 15:40:27 crc kubenswrapper[4730]: declare -A svc_ips Mar 20 15:40:27 crc kubenswrapper[4730]: for svc in "${services[@]}"; do Mar 20 15:40:27 crc kubenswrapper[4730]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 15:40:27 crc kubenswrapper[4730]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 15:40:27 crc kubenswrapper[4730]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 15:40:27 crc kubenswrapper[4730]: # support UDP loadbalancers and require reaching DNS through TCP. 
Mar 20 15:40:27 crc kubenswrapper[4730]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:27 crc kubenswrapper[4730]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:27 crc kubenswrapper[4730]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 15:40:27 crc kubenswrapper[4730]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 15:40:27 crc kubenswrapper[4730]: for i in ${!cmds[*]} Mar 20 15:40:27 crc kubenswrapper[4730]: do Mar 20 15:40:27 crc kubenswrapper[4730]: ips=($(eval "${cmds[i]}")) Mar 20 15:40:27 crc kubenswrapper[4730]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 15:40:27 crc kubenswrapper[4730]: svc_ips["${svc}"]="${ips[@]}" Mar 20 15:40:27 crc kubenswrapper[4730]: break Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: # Update /etc/hosts only if we get valid service IPs Mar 20 15:40:27 crc kubenswrapper[4730]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 15:40:27 crc kubenswrapper[4730]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 15:40:27 crc kubenswrapper[4730]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 15:40:27 crc kubenswrapper[4730]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 15:40:27 crc kubenswrapper[4730]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 15:40:27 crc kubenswrapper[4730]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 15:40:27 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:27 crc kubenswrapper[4730]: continue Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: # Append resolver entries for services Mar 20 15:40:27 crc kubenswrapper[4730]: rc=0 Mar 20 15:40:27 crc kubenswrapper[4730]: for svc in "${!svc_ips[@]}"; do Mar 20 15:40:27 crc kubenswrapper[4730]: for ip in ${svc_ips[${svc}]}; do Mar 20 15:40:27 crc kubenswrapper[4730]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: if [[ $rc -ne 0 ]]; then Mar 20 15:40:27 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:27 crc kubenswrapper[4730]: continue Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: Mar 20 15:40:27 crc kubenswrapper[4730]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 15:40:27 crc kubenswrapper[4730]: # Replace /etc/hosts with our modified version if needed Mar 20 15:40:27 crc kubenswrapper[4730]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 15:40:27 crc kubenswrapper[4730]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: sleep 60 & wait Mar 20 15:40:27 crc kubenswrapper[4730]: unset svc_ips Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plthx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-69fnw_openshift-dns(102cb977-7291-453e-9282-20572071afee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:27 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.535459 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:27 crc kubenswrapper[4730]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | 
xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 15:40:27 crc kubenswrapper[4730]: while [ true ]; Mar 20 15:40:27 crc kubenswrapper[4730]: do Mar 20 15:40:27 crc kubenswrapper[4730]: for f in $(ls /tmp/serviceca); do Mar 20 15:40:27 crc kubenswrapper[4730]: echo $f Mar 20 15:40:27 crc kubenswrapper[4730]: ca_file_path="/tmp/serviceca/${f}" Mar 20 15:40:27 crc kubenswrapper[4730]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 15:40:27 crc kubenswrapper[4730]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 15:40:27 crc kubenswrapper[4730]: if [ -e "${reg_dir_path}" ]; then Mar 20 15:40:27 crc kubenswrapper[4730]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:27 crc kubenswrapper[4730]: else Mar 20 15:40:27 crc kubenswrapper[4730]: mkdir $reg_dir_path Mar 20 15:40:27 crc kubenswrapper[4730]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: for d in $(ls /etc/docker/certs.d); do Mar 20 15:40:27 crc kubenswrapper[4730]: echo $d Mar 20 15:40:27 crc kubenswrapper[4730]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 15:40:27 crc kubenswrapper[4730]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 15:40:27 crc kubenswrapper[4730]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 20 15:40:27 crc kubenswrapper[4730]: rm -rf /etc/docker/certs.d/$d Mar 20 15:40:27 crc kubenswrapper[4730]: fi Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: sleep 60 & wait ${!} Mar 20 15:40:27 crc kubenswrapper[4730]: done Mar 20 15:40:27 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-n4w74_openshift-image-registry(2ee8d55f-90bd-4484-8455-933de455efea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:27 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.536467 4730 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-69fnw" podUID="102cb977-7291-453e-9282-20572071afee" Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.536523 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-n4w74" podUID="2ee8d55f-90bd-4484-8455-933de455efea" Mar 20 15:40:27 crc kubenswrapper[4730]: E0320 15:40:27.536502 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.541844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.541886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.541898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.541914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.541927 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.644431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.644468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.644477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.644492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.644502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.747418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.747467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.747483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.747505 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.747521 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.850830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.850874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.850890 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.850910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.850925 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.953548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.953622 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.953642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.953665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:27 crc kubenswrapper[4730]: I0320 15:40:27.953682 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:27Z","lastTransitionTime":"2026-03-20T15:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.056632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.057222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.057310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.057374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.057445 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.154106 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.159898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.159952 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.159966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.159984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.159996 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.261964 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.262309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.262322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.262335 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.262343 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.364950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.365000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.365018 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.365044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.365062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.368432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.368555 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.368613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.368659 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.368725 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.368842 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.368916 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.368894895 +0000 UTC m=+103.582266304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369494 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.36946794 +0000 UTC m=+103.582839359 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369639 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369724 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.369702207 +0000 UTC m=+103.583073616 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369862 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369909 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369936 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.369997 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.369980634 +0000 UTC m=+103.583352043 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.370120 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.370156 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.370177 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.370235 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.370215851 +0000 UTC m=+103.583587270 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.467861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.468336 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.468548 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.468723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.468896 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.532639 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.532876 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.532945 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.533123 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.533263 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.534597 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.534705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.534734 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.535054 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.535086 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.535445 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.536743 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:40:28 crc kubenswrapper[4730]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 15:40:28 crc kubenswrapper[4730]: apiVersion: v1 Mar 20 15:40:28 crc kubenswrapper[4730]: clusters: Mar 20 15:40:28 crc kubenswrapper[4730]: - cluster: Mar 20 15:40:28 crc kubenswrapper[4730]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 15:40:28 crc kubenswrapper[4730]: server: https://api-int.crc.testing:6443 Mar 20 15:40:28 crc kubenswrapper[4730]: name: default-cluster Mar 20 15:40:28 crc kubenswrapper[4730]: contexts: Mar 20 15:40:28 crc kubenswrapper[4730]: - context: Mar 20 15:40:28 crc kubenswrapper[4730]: cluster: default-cluster Mar 20 15:40:28 crc kubenswrapper[4730]: namespace: default Mar 20 15:40:28 crc kubenswrapper[4730]: user: default-auth Mar 20 15:40:28 crc kubenswrapper[4730]: name: default-context Mar 20 15:40:28 crc kubenswrapper[4730]: current-context: default-context Mar 20 15:40:28 crc kubenswrapper[4730]: kind: Config Mar 20 15:40:28 crc kubenswrapper[4730]: preferences: {} Mar 20 15:40:28 crc kubenswrapper[4730]: users: Mar 20 15:40:28 crc kubenswrapper[4730]: - name: default-auth Mar 20 15:40:28 crc kubenswrapper[4730]: user: Mar 20 15:40:28 crc kubenswrapper[4730]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:28 crc kubenswrapper[4730]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 15:40:28 crc kubenswrapper[4730]: EOF Mar 20 15:40:28 crc kubenswrapper[4730]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mz64b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 15:40:28 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.537647 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzk8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.538781 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.538792 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.570845 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.571530 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.571946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.571977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.571986 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: E0320 15:40:28.571952 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:40:44.57187206 +0000 UTC m=+103.785243489 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.572001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.572012 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.674114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.674159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.674176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.674201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.674220 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.776911 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.776960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.776977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.776998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.777015 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.878414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.878443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.878452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.878464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.878472 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.980235 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.980280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.980288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.980301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:28 crc kubenswrapper[4730]: I0320 15:40:28.980309 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:28Z","lastTransitionTime":"2026-03-20T15:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.082831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.082857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.082866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.082879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.082887 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.185670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.185711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.185722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.185738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.185750 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.288207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.288291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.288311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.288337 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.288355 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.391533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.391588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.391598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.391615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.391627 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.495529 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.495633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.495663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.495701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.495726 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.598583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.598635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.598648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.598671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.598686 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.701432 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.701489 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.701501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.701520 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.701532 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.804591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.804731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.804749 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.804773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.804791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.908576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.908865 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.908932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.909010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:29 crc kubenswrapper[4730]: I0320 15:40:29.909076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:29Z","lastTransitionTime":"2026-03-20T15:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.012466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.012496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.012526 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.012541 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.012553 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.115658 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.115694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.115705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.116414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.116626 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.218818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.219079 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.219295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.219522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.219732 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.322197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.322518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.322669 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.322774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.322846 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.426224 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.426291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.426305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.426326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.426337 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.528978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.529022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.529031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.529047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.529057 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.532676 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:30 crc kubenswrapper[4730]: E0320 15:40:30.532773 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.532863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.532912 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:30 crc kubenswrapper[4730]: E0320 15:40:30.532971 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:30 crc kubenswrapper[4730]: E0320 15:40:30.533763 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.534012 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:30 crc kubenswrapper[4730]: E0320 15:40:30.534447 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.632765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.632838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.632856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.632875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.632903 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.736289 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.736346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.736363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.736384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.736401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.839814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.839866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.839881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.839905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.839924 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.942626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.942690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.942710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.942761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:30 crc kubenswrapper[4730]: I0320 15:40:30.942780 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:30Z","lastTransitionTime":"2026-03-20T15:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.046280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.046626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.046763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.046910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.047045 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.149647 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.149706 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.149716 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.149728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.149736 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.252690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.252738 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.252747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.252761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.252771 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.354970 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.355003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.355013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.355027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.355053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.457310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.457354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.457366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.457383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.457393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.546833 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.557417 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.560455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.560705 4730 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.560859 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.561004 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.561130 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.568436 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.583095 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.592350 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.606195 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.617180 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.632903 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.645500 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.655148 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.662694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc 
kubenswrapper[4730]: I0320 15:40:31.662743 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.662755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.662774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.662787 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.666206 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.676773 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.684165 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.694091 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.704376 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.722130 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.765418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.765462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.765474 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.765490 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.765502 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.868330 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.868390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.868410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.868436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.868453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.970674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.970712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.970723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.970741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:31 crc kubenswrapper[4730]: I0320 15:40:31.970753 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:31Z","lastTransitionTime":"2026-03-20T15:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.072699 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.072758 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.072772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.072789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.072800 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.175183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.175237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.175281 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.175301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.175314 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.277634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.277684 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.277697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.277717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.277729 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.380297 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.380344 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.380356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.380373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.380385 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.483146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.483186 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.483195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.483209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.483218 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.532969 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.533079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.533110 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.533158 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:32 crc kubenswrapper[4730]: E0320 15:40:32.533232 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:32 crc kubenswrapper[4730]: E0320 15:40:32.533451 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:32 crc kubenswrapper[4730]: E0320 15:40:32.533811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:32 crc kubenswrapper[4730]: E0320 15:40:32.533719 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.586103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.586187 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.586215 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.586286 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.586306 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.688417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.688487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.688509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.688573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.688599 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.790869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.790971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.791012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.791040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.791059 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.894874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.894934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.894951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.894976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.894994 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.997898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.997949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.997967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.997992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:32 crc kubenswrapper[4730]: I0320 15:40:32.998022 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:32Z","lastTransitionTime":"2026-03-20T15:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.101119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.101179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.101198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.101222 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.101239 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.134702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.134767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.134790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.134818 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.134840 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.151187 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.155967 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.156050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.156077 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.156108 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.156131 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.172043 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.176617 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.176671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.176691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.176714 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.176731 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.194231 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.199316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.199358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.199372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.199388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.199400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.213387 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.218770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.218825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.218842 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.218867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.218884 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.234337 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:33 crc kubenswrapper[4730]: E0320 15:40:33.234527 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.236471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.236549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.236562 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.236580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.236591 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.339415 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.339481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.339499 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.339524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.339540 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.442476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.442537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.442554 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.442577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.442595 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.545036 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.545119 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.545139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.545169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.545192 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.647887 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.647923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.647934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.647951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.647963 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.750495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.750545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.750555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.750574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.750585 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.854623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.854864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.854932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.855001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.855070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.957795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.958032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.958098 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.958170 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:33 crc kubenswrapper[4730]: I0320 15:40:33.958230 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:33Z","lastTransitionTime":"2026-03-20T15:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.060591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.060644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.060663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.060691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.060713 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.163784 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.163857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.163871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.163897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.163913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.266876 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.266922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.266942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.266963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.266978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.368907 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.368971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.368990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.369024 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.369041 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.471929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.472001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.472012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.472038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.472053 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.532526 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.532580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.532534 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:34 crc kubenswrapper[4730]: E0320 15:40:34.532662 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.532526 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:34 crc kubenswrapper[4730]: E0320 15:40:34.532811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:34 crc kubenswrapper[4730]: E0320 15:40:34.532906 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:34 crc kubenswrapper[4730]: E0320 15:40:34.532995 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.574316 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.574347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.574357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.574378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.574392 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.588583 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.678883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.679056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.679075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.679100 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.679118 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.781324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.781675 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.781864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.782051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.782204 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.885348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.885420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.885442 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.885468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.885485 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.988981 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.989049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.989087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.989120 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:34 crc kubenswrapper[4730]: I0320 15:40:34.989141 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:34Z","lastTransitionTime":"2026-03-20T15:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.092446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.092492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.092503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.092521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.092532 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.195195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.195238 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.195266 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.195282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.195293 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.298198 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.298283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.298305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.298346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.298382 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.401689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.401718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.401726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.401739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.401748 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.504837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.504918 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.504935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.504960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.504978 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.607820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.607856 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.607868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.607885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.607896 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.678676 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.711464 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.711515 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.711540 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.711566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.711580 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.814542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.814642 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.814667 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.814756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.814834 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.917310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.917342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.917355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.917372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:35 crc kubenswrapper[4730]: I0320 15:40:35.917384 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:35Z","lastTransitionTime":"2026-03-20T15:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.019338 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.019393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.019410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.019436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.019453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.121958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.122011 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.122032 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.122066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.122100 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.224402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.224454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.224468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.224487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.224500 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.327385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.327435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.327448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.327467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.327479 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.431085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.431168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.431190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.431225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.431276 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.532172 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.532264 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.532880 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:36 crc kubenswrapper[4730]: E0320 15:40:36.532959 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.533013 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:36 crc kubenswrapper[4730]: E0320 15:40:36.533087 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:36 crc kubenswrapper[4730]: E0320 15:40:36.533153 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:36 crc kubenswrapper[4730]: E0320 15:40:36.533221 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.535452 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.535477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.535487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.535523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.535534 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.638466 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.638501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.638510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.638524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.638534 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.741204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.741831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.741847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.741867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.741880 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.844227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.844299 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.844311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.844326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.844350 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.907106 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.907164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.909150 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerStarted","Data":"458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.916697 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.927320 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.935052 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.941327 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.946324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.946350 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.946359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.946371 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.946381 4730 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:36Z","lastTransitionTime":"2026-03-20T15:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.948103 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.964777 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.974469 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.982175 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.990161 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:36 crc kubenswrapper[4730]: I0320 15:40:36.998146 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.007331 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.015463 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.022744 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.039261 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.048987 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.049017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.049026 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.049039 4730 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.049049 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.049275 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.055305 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.061840 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.072481 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.082455 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.089211 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.096170 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.103203 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.108521 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.118521 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.124121 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.132372 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.139808 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.149760 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.151425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.151450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.151459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.151472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.151482 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.158001 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.169113 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.178906 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.198537 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.253501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.253535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.253547 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.253563 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.253575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.355746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.356317 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.356340 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.356356 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.356369 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.458802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.458829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.458838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.458850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.458858 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.560537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.560578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.560589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.560605 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.560625 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.663463 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.663506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.663518 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.663537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.663549 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.766169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.766201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.766213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.766227 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.766237 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.870457 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.870508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.870522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.870564 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.870582 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.913224 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85" exitCode=0 Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.913330 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.930380 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.953578 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.968056 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.973576 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.973637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.973655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.973682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.973753 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:37Z","lastTransitionTime":"2026-03-20T15:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.980137 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:37 crc kubenswrapper[4730]: I0320 15:40:37.991331 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:37Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.004212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.013488 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.027234 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.042591 4730 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.056585 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.066313 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc 
kubenswrapper[4730]: I0320 15:40:38.076630 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.078231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.078280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.078291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.078305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.078314 4730 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.086633 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.102097 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.111437 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.119560 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.180804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.181082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.181158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.181230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.181320 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.284334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.284370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.284382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.284398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.284410 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.387454 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.387507 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.387532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.387555 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.387571 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.491085 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.491151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.491168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.491197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.491215 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.533089 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.533129 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.533161 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.533181 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:38 crc kubenswrapper[4730]: E0320 15:40:38.533387 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:38 crc kubenswrapper[4730]: E0320 15:40:38.533913 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:38 crc kubenswrapper[4730]: E0320 15:40:38.533993 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:38 crc kubenswrapper[4730]: E0320 15:40:38.534046 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.593436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.593486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.593524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.593546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.593562 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.696707 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.696751 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.696764 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.696783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.696796 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.798958 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.799201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.799313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.799407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.799490 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.901792 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.902049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.902143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.902228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.902339 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:38Z","lastTransitionTime":"2026-03-20T15:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.919624 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0" exitCode=0 Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.919673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0"} Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.960705 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.976150 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:38 crc kubenswrapper[4730]: I0320 15:40:38.993093 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:38Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.006115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.006381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.006600 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.006845 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.007093 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.007830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.031276 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.058567 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.080048 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.099238 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.111204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.111425 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.111769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.112006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.112180 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.113058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.126079 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.136138 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.145971 4730 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.159052 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.167641 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc 
kubenswrapper[4730]: I0320 15:40:39.177145 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.188052 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.217929 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.217975 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.217985 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.218019 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.218031 4730 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.321096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.321134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.321144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.321158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.321169 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.423695 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.423734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.423744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.423759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.423770 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.526400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.526455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.526465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.526477 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.526487 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.630341 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.630384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.630396 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.630410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.630423 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.732718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.732774 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.732791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.732812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.732827 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.834783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.834817 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.834827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.834841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.834852 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.926121 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34" exitCode=0 Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.926197 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.931536 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" event={"ID":"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0","Type":"ContainerStarted","Data":"c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.931598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" event={"ID":"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0","Type":"ContainerStarted","Data":"1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.938934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.938992 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.939015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.939043 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:39 crc 
kubenswrapper[4730]: I0320 15:40:39.939071 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:39Z","lastTransitionTime":"2026-03-20T15:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.950721 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.967309 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:39 crc kubenswrapper[4730]: I0320 15:40:39.996354 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:39Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.010828 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.022235 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.034656 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.041305 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.041345 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.041358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.041375 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.041389 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.050644 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 
15:40:40.063174 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.073778 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.084901 4730 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.094603 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.105798 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc 
kubenswrapper[4730]: I0320 15:40:40.118013 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.129220 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.139419 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.144942 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 
15:40:40.144979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.144990 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.145006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.145017 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.150994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.161340 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.178279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.190267 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.200702 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.211688 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.222853 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.234181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.244135 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.247023 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 
15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.247047 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.247057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.247071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.247079 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.255541 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.268812 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 
15:40:40.281240 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.293404 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.304773 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.315518 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.325870 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc 
kubenswrapper[4730]: I0320 15:40:40.335000 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.349802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.349837 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.349850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 
15:40:40.349866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.349877 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.452905 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.452951 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.452963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.452980 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.452992 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.532570 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.532760 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:40 crc kubenswrapper[4730]: E0320 15:40:40.532757 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:40 crc kubenswrapper[4730]: E0320 15:40:40.532954 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.533055 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.533344 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:40 crc kubenswrapper[4730]: E0320 15:40:40.533483 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:40 crc kubenswrapper[4730]: E0320 15:40:40.533495 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.549274 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.555535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.555580 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.555594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.555616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.555628 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.659312 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.659355 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.659366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.659380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.659389 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.761314 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.761358 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.761369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.761385 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.761398 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.863914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.863963 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.863976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.863993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.864004 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.935065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerStarted","Data":"f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.936159 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n4w74" event={"ID":"2ee8d55f-90bd-4484-8455-933de455efea","Type":"ContainerStarted","Data":"c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.938749 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134" exitCode=0 Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.938804 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.955485 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.966329 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.966357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.966366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.966378 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.966387 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:40Z","lastTransitionTime":"2026-03-20T15:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.970006 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:40 crc kubenswrapper[4730]: I0320 15:40:40.988118 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:40Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.007960 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.018181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.036036 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.051115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.066509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.069405 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.069443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.069455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.069472 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.069484 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.080086 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.090605 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.101365 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.125101 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.135983 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.145661 
4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.158335 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.168562 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.172000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc 
kubenswrapper[4730]: I0320 15:40:41.172046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.172057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.172075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.172087 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.178813 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.191660 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.202747 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.218947 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.232781 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.244071 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.253718 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.264287 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.273805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.273840 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.273848 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.273861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.273870 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.278122 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b
11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.294561 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib
/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.308089 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.317629 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.328466 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.340449 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.350718 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc 
kubenswrapper[4730]: I0320 15:40:41.359983 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.370492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.375753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.375798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.375806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.375820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.375830 4730 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.381882 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.479373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.479408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.479419 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.479436 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.479447 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.533219 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.548085 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 
2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.567635 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.582232 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.584523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 
15:40:41.584717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.584746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.584820 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.584848 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.599892 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.612660 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc 
kubenswrapper[4730]: I0320 15:40:41.631994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.648312 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.670324 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.685810 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.689665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.689712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.689732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.689759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.689779 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.696385 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.718772 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.731966 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.743338 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.754938 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.765902 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.776089 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.788756 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.796267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.796310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.796320 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.796334 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.796344 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.898446 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.898483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.898491 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.898503 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.898512 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:41Z","lastTransitionTime":"2026-03-20T15:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.943373 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.944837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.945285 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.946159 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-69fnw" event={"ID":"102cb977-7291-453e-9282-20572071afee","Type":"ContainerStarted","Data":"35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.947902 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.954073 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a" exitCode=0 Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.954137 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a"} Mar 
20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.956275 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" exitCode=0 Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.956298 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.961919 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.986065 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:41 crc kubenswrapper[4730]: I0320 15:40:41.998903 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.000319 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 
15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.000349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.000357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.000369 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.000377 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.008616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.022317 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.039836 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.053099 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.066380 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.077370 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.088827 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.098295 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc 
kubenswrapper[4730]: I0320 15:40:42.108577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.108623 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.108633 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.108646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.108653 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.139015 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.149548 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.178371 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.189551 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.200806 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.209332 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.213329 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.213397 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.213416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.213441 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.213457 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.220885 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.232440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.243504 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.252758 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc 
kubenswrapper[4730]: I0320 15:40:42.272372 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.289661 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.307008 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.315033 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.315066 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.315075 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.315089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.315098 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.322343 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.332988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.354516 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.398136 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.417129 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.417189 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.417199 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.417218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.417229 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.437993 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.475053 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.515186 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.519826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.519847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.519854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.519867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.519876 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.532420 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.532424 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.532433 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.532509 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:42 crc kubenswrapper[4730]: E0320 15:40:42.533481 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:42 crc kubenswrapper[4730]: E0320 15:40:42.534082 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:42 crc kubenswrapper[4730]: E0320 15:40:42.534157 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:42 crc kubenswrapper[4730]: E0320 15:40:42.534213 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.557737 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.602410 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.622740 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.622769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc 
kubenswrapper[4730]: I0320 15:40:42.622780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.622795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.622804 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.637888 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.724379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.724414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.724424 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.724439 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.724449 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.826370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.826408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.826421 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.826438 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.826450 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.929151 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.929200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.929211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.929228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.929239 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:42Z","lastTransitionTime":"2026-03-20T15:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.960470 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.966482 4730 generic.go:334] "Generic (PLEG): container finished" podID="dbb015c0-3a11-48bf-a59f-22bc03ca2fb9" containerID="fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd" exitCode=0 Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.966569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerDied","Data":"fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974309 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974652 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974725 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.974809 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 15:40:42 crc kubenswrapper[4730]: I0320 15:40:42.995882 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:42Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.009167 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.022757 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.032574 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.032615 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.032625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.032641 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.032655 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.042709 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.053397 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.064486 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20
T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.077355 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.090894 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.108451 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.128045 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.135368 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.135479 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc 
kubenswrapper[4730]: I0320 15:40:43.135537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.135602 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.135683 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.139986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.148661 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.160636 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.198888 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.234621 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc 
kubenswrapper[4730]: I0320 15:40:43.238257 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.238276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.238284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.238296 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.238304 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.275215 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.295718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.295754 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.295767 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.295783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.295795 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.310710 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.314670 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.314718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.314729 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.314748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.314762 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.317137 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.333666 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.339643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.339702 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.339718 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.339744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.339758 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.363612 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.368895 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.368966 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.368983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.369002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.369015 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.378920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.384868 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.389838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.389873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.389885 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.389902 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.389914 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.402374 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.404456 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: E0320 15:40:43.404560 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.406176 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.406294 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.406370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.406444 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.406501 4730 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.439447 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.482089 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.509174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.509400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.509492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.509613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.509700 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.518085 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.555642 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.598207 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.611390 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.611420 
4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.611430 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.611443 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.611453 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.639282 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.675434 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc 
kubenswrapper[4730]: I0320 15:40:43.713071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.713107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.713117 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.713132 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.713143 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.716352 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.756762 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.796302 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.815420 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.815462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.815471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.815485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.815493 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.837053 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.876890 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.918136 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.918204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.918217 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 
15:40:43.918236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.918270 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:43Z","lastTransitionTime":"2026-03-20T15:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.929422 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.964299 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.978482 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.978522 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.982440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" event={"ID":"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9","Type":"ContainerStarted","Data":"9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc"} Mar 20 15:40:43 crc kubenswrapper[4730]: I0320 15:40:43.997136 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9
bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:43Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.020542 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.020582 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.020598 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.020616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.020627 4730 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.038509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.078784 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.122854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.122896 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.122908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 
15:40:44.122923 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.122934 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.124911 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.162230 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51e
ffc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.202477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.225115 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.225148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.225156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.225169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.225179 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.242229 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.278845 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.322568 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.327527 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.327575 
4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.327587 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.327603 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.327908 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.366409 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.395167 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.436624 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.436668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.436676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.436690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.436699 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.441164 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.462769 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.462848 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.462873 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.462891 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.462924 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:41:16.462904696 +0000 UTC m=+135.676276065 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.462953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.462988 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463004 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered 
Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463016 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463039 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463049 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463137 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463132 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463152 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463060 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-20 15:41:16.46304966 +0000 UTC m=+135.676421019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463290 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:41:16.463275616 +0000 UTC m=+135.676646985 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463309 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:41:16.463298427 +0000 UTC m=+135.676669796 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.463325 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:41:16.463318348 +0000 UTC m=+135.676689717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.491089 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.514131 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.532882 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.532994 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.533051 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.533186 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.533221 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.533288 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.533589 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.533780 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.538202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.538230 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.538243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.538278 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.538289 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.555718 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.595795 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.636410 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.641325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc 
kubenswrapper[4730]: I0320 15:40:44.641366 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.641374 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.641389 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.641398 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.664919 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.665082 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: E0320 15:40:44.665168 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:41:16.665148421 +0000 UTC m=+135.878519790 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.674495 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:44 crc 
kubenswrapper[4730]: I0320 15:40:44.743154 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.743209 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.743221 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.743243 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.743300 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.845143 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.845183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.845194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.845211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.845226 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.947782 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.947849 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.947867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.947897 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:44 crc kubenswrapper[4730]: I0320 15:40:44.947925 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:44Z","lastTransitionTime":"2026-03-20T15:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.053756 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.054122 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.054139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.054156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.054173 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.158362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.158403 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.158416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.158434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.158446 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.261458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.261510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.261522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.261537 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.261546 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.364781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.364857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.364875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.364903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.364923 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.467613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.467691 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.467708 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.467732 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.467749 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.571087 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.571164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.571183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.571207 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.571225 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.673889 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.673937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.673954 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.673976 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.673995 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.777367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.777881 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.777915 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.777945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.777966 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.879400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.879458 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.879471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.879485 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.879495 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.981604 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.981639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.981648 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.981662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.981672 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:45Z","lastTransitionTime":"2026-03-20T15:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:45 crc kubenswrapper[4730]: I0320 15:40:45.995631 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.084380 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.084423 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.084435 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.084451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.084463 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.187178 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.187239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.187331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.187359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.187380 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.289693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.289734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.289744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.289760 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.289771 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.393521 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.393620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.393655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.393685 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.393709 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.497166 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.497219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.497232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.497270 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.497284 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.533145 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.533183 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.533224 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.533302 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:46 crc kubenswrapper[4730]: E0320 15:40:46.533421 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:46 crc kubenswrapper[4730]: E0320 15:40:46.533656 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:46 crc kubenswrapper[4730]: E0320 15:40:46.533901 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:46 crc kubenswrapper[4730]: E0320 15:40:46.533987 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.600153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.600197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.600208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.600223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.600232 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.702728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.702787 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.702806 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.702829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.702848 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.805545 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.805586 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.805601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.805620 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.805635 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.908739 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.908791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.908808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.908831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:46 crc kubenswrapper[4730]: I0320 15:40:46.908852 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:46Z","lastTransitionTime":"2026-03-20T15:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.011914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.011984 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.012007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.012035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.012057 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.114382 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.114434 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.114450 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.114471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.114489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.217639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.217908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.217920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.217934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.217946 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.319799 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.319839 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.319851 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.319867 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.319878 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.422523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.422558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.422572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.422589 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.422601 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.524761 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.524808 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.524828 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.524850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.524866 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.627263 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.627332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.627347 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.627367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.627379 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.730747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.730819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.730863 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.730901 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.730924 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.834145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.834197 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.834211 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.834229 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.834264 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.936504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.936546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.936556 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.936572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:47 crc kubenswrapper[4730]: I0320 15:40:47.936581 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:47Z","lastTransitionTime":"2026-03-20T15:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.014110 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.014709 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.029517 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.040131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.040181 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.040200 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.040223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.040240 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.043686 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.051585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.058479 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.076492 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.091274 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.103867 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.119119 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.137664 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.142084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.142114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.142123 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.142138 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.142148 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.156222 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.170452 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.186477 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.208624 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.224197 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc 
kubenswrapper[4730]: I0320 15:40:48.236429 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.244914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.244983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.244997 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 
15:40:48.245016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.245029 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.251144 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.269749 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.287498 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-
20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.300818 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.311109 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.331363 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15
:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.348826 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.351274 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.351323 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.351339 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.351363 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.351380 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.391177 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.411538 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.423214 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.432766 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.444424 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.453814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.453853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.453861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.453873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.453883 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.454291 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\
\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.464907 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.477490 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.490286 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.501225 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc 
kubenswrapper[4730]: I0320 15:40:48.513565 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.526055 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.532604 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.532647 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:48 crc kubenswrapper[4730]: E0320 15:40:48.532731 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:48 crc kubenswrapper[4730]: E0320 15:40:48.532869 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.533007 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:48 crc kubenswrapper[4730]: E0320 15:40:48.533095 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.533325 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:48 crc kubenswrapper[4730]: E0320 15:40:48.533429 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.546357 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:48Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.555950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.555999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.556016 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.556042 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.556064 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.658798 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.658866 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.658883 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.658956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.658979 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.762069 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.762147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.762168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.762196 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.762214 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.865508 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.865991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.866010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.866038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.866057 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.969174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.969219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.969232 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.969269 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:48 crc kubenswrapper[4730]: I0320 15:40:48.969284 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:48Z","lastTransitionTime":"2026-03-20T15:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.017910 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.018566 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.041659 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.059139 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.070426 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.076208 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.076268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.076282 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.076301 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.076656 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.084513 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.094986 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.112793 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.135146 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.151947 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.165205 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.177727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.179681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.179710 
4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.179719 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.179737 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.179746 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.189026 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.199873 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc 
kubenswrapper[4730]: I0320 15:40:49.214514 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.229997 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.243962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.261177 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.279016 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.282778 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.282841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.282858 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.282880 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.282895 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.297181 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:49Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.386322 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.386728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.386854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.386935 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.387018 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.490869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.491140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.491231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.491311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.491376 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.594373 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.594536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.594631 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.594721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.594811 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.696993 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.697059 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.697074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.697097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.697142 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.799593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.799665 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.799678 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.799694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.799717 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.903158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.903225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.903245 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.903311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:49 crc kubenswrapper[4730]: I0320 15:40:49.903331 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:49Z","lastTransitionTime":"2026-03-20T15:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.005616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.005846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.005925 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.006005 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.006128 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.022157 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/0.log" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.025232 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518" exitCode=1 Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.025273 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.025820 4730 scope.go:117] "RemoveContainer" containerID="1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.039234 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0
a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.065168 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c292
95ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.076463 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.089228 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.101303 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.108285 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.108328 
4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.108342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.108407 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.108434 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.112518 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.123302 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.133328 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.149197 4730 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.160629 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.174839 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.185509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc 
kubenswrapper[4730]: I0320 15:40:50.196731 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.207848 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.211655 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.211693 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.211705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.211722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.211733 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.224483 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:49Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:49.565874 6588 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 15:40:49.565925 6588 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI0320 15:40:49.565931 6588 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 15:40:49.565977 6588 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:49.565987 6588 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:49.565980 6588 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 15:40:49.566007 6588 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:49.566013 6588 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0320 15:40:49.566012 6588 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:49.566033 6588 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0320 15:40:49.566034 6588 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 15:40:49.566047 6588 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0320 15:40:49.566043 6588 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 15:40:49.566092 6588 factory.go:656] Stopping watch factory\\\\nI0320 15:40:49.566129 6588 ovnkube.go:599] Stopped ovnkube\\\\nI0320 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.236425 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.244622 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:50Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.313950 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.314000 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.314017 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.314038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.314055 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.417932 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.417973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.417983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.417998 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.418007 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.520402 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.520445 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.520459 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.520476 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.520489 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.532746 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.532774 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.532838 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.532913 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:50 crc kubenswrapper[4730]: E0320 15:40:50.532908 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:50 crc kubenswrapper[4730]: E0320 15:40:50.533050 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:50 crc kubenswrapper[4730]: E0320 15:40:50.533121 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:50 crc kubenswrapper[4730]: E0320 15:40:50.533186 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.623114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.623156 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.623164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.623179 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.623189 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.725789 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.725823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.725831 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.725844 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.725853 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.827869 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.827912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.827921 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.827936 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.827950 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.929769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.929803 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.929811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.929824 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:50 crc kubenswrapper[4730]: I0320 15:40:50.929832 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:50Z","lastTransitionTime":"2026-03-20T15:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.028704 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/1.log" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.029408 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/0.log" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.032414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.032448 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.032460 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.032493 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.032503 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.033127 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7" exitCode=1 Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.033159 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.033188 4730 scope.go:117] "RemoveContainer" containerID="1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.033774 4730 scope.go:117] "RemoveContainer" containerID="7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7" Mar 20 15:40:51 crc kubenswrapper[4730]: E0320 15:40:51.034011 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.052281 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.068854 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.087074 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:49Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:49.565874 6588 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 15:40:49.565925 6588 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 15:40:49.565931 6588 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 15:40:49.565977 6588 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:49.565987 6588 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:49.565980 6588 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 15:40:49.566007 6588 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:49.566013 6588 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0320 15:40:49.566012 6588 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:49.566033 6588 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0320 15:40:49.566034 6588 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 15:40:49.566047 6588 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0320 15:40:49.566043 6588 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 15:40:49.566092 6588 factory.go:656] Stopping watch factory\\\\nI0320 15:40:49.566129 6588 ovnkube.go:599] Stopped ovnkube\\\\nI0320 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.102700 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.111306 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.128925 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c660695
9a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9
c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.134674 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.134722 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.134736 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.134753 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.134783 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.148039 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.163878 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.174164 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.189969 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.203008 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.216718 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.228117 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.238059 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.238135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.238159 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.238190 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.238216 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.240850 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.256382 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.269923 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.282462 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc 
kubenswrapper[4730]: I0320 15:40:51.340814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.340850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.340860 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.340874 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.340884 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.443097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.443141 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.443153 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.443169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.443182 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.548746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.549044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.549164 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.549303 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.549403 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.555273 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.571949 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.594165 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.604983 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.614451 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.629026 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.647805 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.651681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.651710 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.651720 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.651735 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.651746 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.660804 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.670933 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.682746 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.695120 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.706976 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc 
kubenswrapper[4730]: I0320 15:40:51.719543 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.736855 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.754283 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.754342 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.754359 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.754379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.754393 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.767307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c7fc7c3b6cf0ebabd03bf607d7da5f1221244499e74f0f13a94a8015113c518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:49Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:49.565874 6588 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 15:40:49.565925 6588 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 15:40:49.565931 6588 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 
15:40:49.565977 6588 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:49.565987 6588 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:49.565980 6588 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 15:40:49.566007 6588 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:49.566013 6588 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0320 15:40:49.566012 6588 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:49.566033 6588 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0320 15:40:49.566034 6588 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 15:40:49.566047 6588 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0320 15:40:49.566043 6588 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 15:40:49.566092 6588 factory.go:656] Stopping watch factory\\\\nI0320 15:40:49.566129 6588 ovnkube.go:599] Stopped ovnkube\\\\nI0320 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] 
Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.786174 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.799957 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.856357 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.856413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.856433 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.856455 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.856472 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.958524 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.958559 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.958570 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.958584 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:51 crc kubenswrapper[4730]: I0320 15:40:51.958595 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:51Z","lastTransitionTime":"2026-03-20T15:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.038097 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/1.log" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.041045 4730 scope.go:117] "RemoveContainer" containerID="7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7" Mar 20 15:40:52 crc kubenswrapper[4730]: E0320 15:40:52.041196 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.052040 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.060666 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.060700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.060711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc 
kubenswrapper[4730]: I0320 15:40:52.060726 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.060739 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.062272 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.075660 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0
a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.093405 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c292
95ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.112319 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.123848 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.134484 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\
\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.147086 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.155240 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.163656 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.165414 4730 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.165467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.165481 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.165522 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.165534 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.173327 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.183268 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.193513 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.202304 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc 
kubenswrapper[4730]: I0320 15:40:52.214753 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.225994 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.242776 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 
15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.267770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.267804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.267835 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.267852 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.267864 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.370728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.370802 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.370814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.370884 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.370895 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.475038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.475139 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.475168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.475201 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.475225 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.532361 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.532386 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:52 crc kubenswrapper[4730]: E0320 15:40:52.532527 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.532364 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.532596 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:52 crc kubenswrapper[4730]: E0320 15:40:52.532760 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:52 crc kubenswrapper[4730]: E0320 15:40:52.532956 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:52 crc kubenswrapper[4730]: E0320 15:40:52.533073 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.578504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.578673 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.578701 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.578733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.578757 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.681971 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.682037 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.682063 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.682096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.682119 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.785161 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.785223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.785234 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.785287 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.785303 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.872232 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.888124 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.888204 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.888231 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.888326 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.888356 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.893225 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.909851 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.941728 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 
15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.959430 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.972263 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.987772 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c660695
9a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9
c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:52Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.991391 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.991453 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.991471 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.991496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:52 crc kubenswrapper[4730]: I0320 15:40:52.991513 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:52Z","lastTransitionTime":"2026-03-20T15:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.010276 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.029995 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.044016 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.055629 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.069929 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.083350 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.094614 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.094656 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.094668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.094686 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.094697 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.096319 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.111566 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.128277 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.149781 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.164067 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc 
kubenswrapper[4730]: I0320 15:40:53.197769 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.197823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.197838 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.197857 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.197868 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.300945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.301006 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.301021 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.301044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.301062 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.403495 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.403578 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.403601 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.403629 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.403649 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.507010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.507097 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.507121 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.507150 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.507171 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.610237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.610346 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.610370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.610406 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.610427 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.713195 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.713313 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.713332 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.713365 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.713390 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.757771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.757846 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.757871 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.757909 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.757935 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.780319 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.785146 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.785185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.785194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.785210 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.785221 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.802347 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.808488 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.808544 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.808565 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.808593 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.808611 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.829184 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.835934 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.835991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.836009 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.836035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.836057 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.857205 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.862772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.862825 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.862843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.862870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.862890 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.879841 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:40:53Z is after 2025-08-24T17:21:41Z" Mar 20 15:40:53 crc kubenswrapper[4730]: E0320 15:40:53.879965 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.881733 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.881775 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.881790 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.881809 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.881822 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.986218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.986333 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.986353 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.986379 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:53 crc kubenswrapper[4730]: I0320 15:40:53.986400 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:53Z","lastTransitionTime":"2026-03-20T15:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.089572 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.089634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.089653 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.089681 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.089705 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.191944 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.191999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.192010 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.192029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.192040 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.294113 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.294168 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.294183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.294202 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.294219 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.397372 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.397418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.397465 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.397486 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.397499 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.499886 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.499922 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.499931 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.499945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.499953 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.532386 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:54 crc kubenswrapper[4730]: E0320 15:40:54.532498 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.532670 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.532691 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:54 crc kubenswrapper[4730]: E0320 15:40:54.532740 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:54 crc kubenswrapper[4730]: E0320 15:40:54.532903 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.533057 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:54 crc kubenswrapper[4730]: E0320 15:40:54.533276 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.603416 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.603496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.603512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.603533 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.603548 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.707635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.707727 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.707747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.707770 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.707788 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.811637 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.811703 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.811721 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.811746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.811764 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.915228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.915311 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.915328 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.915352 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:54 crc kubenswrapper[4730]: I0320 15:40:54.915370 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:54Z","lastTransitionTime":"2026-03-20T15:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.018994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.019058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.019095 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.019130 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.019153 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.123183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.123291 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.123318 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.123349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.123370 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.226607 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.226664 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.226682 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.226705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.226723 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.329053 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.329125 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.329144 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.329174 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.329196 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.432748 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.432811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.432829 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.432853 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.432871 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.534960 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.535007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.535027 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.535045 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.535058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.637591 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.637635 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.637646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.637662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.637675 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.739811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.739864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.739879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.739899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.739913 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.842483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.842536 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.842549 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.842566 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.842579 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.945091 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.945126 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.945135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.945149 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:55 crc kubenswrapper[4730]: I0320 15:40:55.945158 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:55Z","lastTransitionTime":"2026-03-20T15:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.047035 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.047072 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.047089 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.047110 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.047121 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.149899 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.149949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.149961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.149979 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.149988 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.252639 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.252700 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.252717 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.252744 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.252764 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.355467 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.355501 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.355509 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.355523 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.355532 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.458688 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.458741 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.458759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.458783 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.458799 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.532831 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.532925 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:56 crc kubenswrapper[4730]: E0320 15:40:56.533067 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.533112 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.532873 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:56 crc kubenswrapper[4730]: E0320 15:40:56.533231 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:56 crc kubenswrapper[4730]: E0320 15:40:56.533388 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:56 crc kubenswrapper[4730]: E0320 15:40:56.533554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.561381 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.561451 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.561470 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.561496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.561516 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.664781 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.664850 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.664868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.664920 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.664941 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.768165 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.768225 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.768267 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.768292 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.768305 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.871331 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.871383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.871398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.871417 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.871429 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.973956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.974012 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.974029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.974056 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:56 crc kubenswrapper[4730]: I0320 15:40:56.974076 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:56Z","lastTransitionTime":"2026-03-20T15:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.076284 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.076573 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.076643 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.076705 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.076766 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.179982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.180051 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.180074 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.180114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.180134 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.283661 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.283728 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.283747 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.283776 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.283795 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.386795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.386854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.386873 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.386898 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.386916 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.490145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.490194 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.490206 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.490223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.490235 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.592398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.592462 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.592480 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.592546 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.592569 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.695595 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.696057 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.696070 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.696090 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.696103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.798977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.799103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.799131 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.799160 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.799182 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.902239 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.902362 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.902384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.902410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:57 crc kubenswrapper[4730]: I0320 15:40:57.902428 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:57Z","lastTransitionTime":"2026-03-20T15:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.005690 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.005765 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.005785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.005811 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.005832 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.109022 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.109148 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.109183 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.109216 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.109240 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.211912 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.211977 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.211999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.212031 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.212052 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.314875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.314940 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.314957 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.314982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.315016 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.418711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.419030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.419140 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.419237 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.419374 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.522946 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.522991 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.523007 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.523030 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.523050 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.532773 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.532785 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.532842 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.532886 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:40:58 crc kubenswrapper[4730]: E0320 15:40:58.533621 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:40:58 crc kubenswrapper[4730]: E0320 15:40:58.533744 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:40:58 crc kubenswrapper[4730]: E0320 15:40:58.533911 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:40:58 crc kubenswrapper[4730]: E0320 15:40:58.533827 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.625393 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.625663 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.625745 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.625819 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.625885 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.728713 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.728759 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.728773 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.728791 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.728802 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.831914 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.831994 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.832015 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.832040 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.832058 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.934577 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.934625 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.934634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.934646 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:58 crc kubenswrapper[4730]: I0320 15:40:58.934655 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:58Z","lastTransitionTime":"2026-03-20T15:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.037671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.038071 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.038214 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.038410 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.038552 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.141689 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.141731 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.141742 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.141757 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.141768 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.244431 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.244506 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.244532 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.244558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.244575 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.347908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.347959 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.347978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.348001 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.348018 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.451910 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.451982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.452003 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.452029 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.452050 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.554102 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.554145 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.554157 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.554175 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.554187 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.657752 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.657814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.657827 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.657847 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.657859 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.761558 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.761644 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.761671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.761711 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.761742 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.865672 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.866134 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.866321 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.866496 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.866668 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.970795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.970879 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.970903 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.970945 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:40:59 crc kubenswrapper[4730]: I0320 15:40:59.970967 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:40:59Z","lastTransitionTime":"2026-03-20T15:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.074498 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.074567 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.074594 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.074626 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.074652 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.177999 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.178050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.178065 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.178086 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.178101 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.280973 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.281046 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.281061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.281084 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.281103 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.383478 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.383780 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.383854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.383937 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.384010 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.487812 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.487875 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.487888 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.487908 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.487923 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.533111 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.533202 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.533290 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:00 crc kubenswrapper[4730]: E0320 15:41:00.533496 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:00 crc kubenswrapper[4730]: E0320 15:41:00.533733 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.533920 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:00 crc kubenswrapper[4730]: E0320 15:41:00.534004 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:00 crc kubenswrapper[4730]: E0320 15:41:00.534390 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.591511 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.591571 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.591588 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.591613 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.591635 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.694418 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.694468 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.694512 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.694535 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.694551 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.797543 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.797611 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.797634 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.797662 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.797686 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.900492 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.900814 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.900961 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.901103 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:00 crc kubenswrapper[4730]: I0320 15:41:00.901307 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:00Z","lastTransitionTime":"2026-03-20T15:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.005413 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.005487 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.005504 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.005528 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.005548 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:01Z","lastTransitionTime":"2026-03-20T15:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.108058 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.108135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.108158 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.108185 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.108206 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:01Z","lastTransitionTime":"2026-03-20T15:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.212383 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.212510 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.212539 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.212608 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.212634 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:01Z","lastTransitionTime":"2026-03-20T15:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.316483 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.316557 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.316585 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.316616 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.316639 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:01Z","lastTransitionTime":"2026-03-20T15:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.419854 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.419926 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.419949 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.419982 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.420005 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:01Z","lastTransitionTime":"2026-03-20T15:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:01 crc kubenswrapper[4730]: E0320 15:41:01.520897 4730 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.558428 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.574307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.597413 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c660695
9a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9
c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: E0320 15:41:01.629226 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.632547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.657531 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.671540 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.688549 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.700810 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.713212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.725965 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.740463 4730 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.758828 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.772880 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.786739 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc 
kubenswrapper[4730]: I0320 15:41:01.806996 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.824810 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:01 crc kubenswrapper[4730]: I0320 15:41:01.846648 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 
15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:01Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:02 crc kubenswrapper[4730]: I0320 15:41:02.532405 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:02 crc kubenswrapper[4730]: I0320 15:41:02.532488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:02 crc kubenswrapper[4730]: I0320 15:41:02.532493 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:02 crc kubenswrapper[4730]: I0320 15:41:02.532447 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:02 crc kubenswrapper[4730]: E0320 15:41:02.533195 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:02 crc kubenswrapper[4730]: E0320 15:41:02.533535 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:02 crc kubenswrapper[4730]: E0320 15:41:02.533703 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:02 crc kubenswrapper[4730]: E0320 15:41:02.533847 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:03 crc kubenswrapper[4730]: I0320 15:41:03.534087 4730 scope.go:117] "RemoveContainer" containerID="7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.088755 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.089039 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.089049 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.089061 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.089070 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:04Z","lastTransitionTime":"2026-03-20T15:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.090423 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/1.log" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.093103 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0"} Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.093555 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.103155 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.106458 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.106928 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.106956 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.106965 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:04 crc 
kubenswrapper[4730]: I0320 15:41:04.106983 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.106994 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:04Z","lastTransitionTime":"2026-03-20T15:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.118596 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.119127 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa6
2537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.122978 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.123013 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.123028 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.123044 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.123056 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:04Z","lastTransitionTime":"2026-03-20T15:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.137824 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.140446 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.141650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.141694 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.141712 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.141734 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.141751 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:04Z","lastTransitionTime":"2026-03-20T15:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.154045 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.160449 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186
fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231
217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.161763 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.161795 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.161805 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.161818 4730 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.161826 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:04Z","lastTransitionTime":"2026-03-20T15:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.182012 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.183527 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.183693 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.196593 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.209661 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.219600 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.236356 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.248377 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.262663 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.273293 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.288037 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.297206 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc 
kubenswrapper[4730]: I0320 15:41:04.309722 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.321058 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.350184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 
15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:04Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.532474 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.532572 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.532598 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:04 crc kubenswrapper[4730]: I0320 15:41:04.532486 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.532885 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.532984 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.533071 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:04 crc kubenswrapper[4730]: E0320 15:41:04.533113 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.099087 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/2.log" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.100003 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/1.log" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.102955 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" exitCode=1 Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.103021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0"} Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.103082 4730 scope.go:117] "RemoveContainer" containerID="7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.104392 4730 scope.go:117] "RemoveContainer" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" Mar 20 15:41:05 crc kubenswrapper[4730]: E0320 15:41:05.104697 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:05 crc kubenswrapper[4730]: 
I0320 15:41:05.124655 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.137494 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.152419 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.168739 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.180804 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.191547 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.212222 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.240870 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.257920 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.272069 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.283528 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.300046 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.312382 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc 
kubenswrapper[4730]: I0320 15:41:05.324280 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.337289 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.356796 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7695af707a9a2e3b80ba56e6a4b97e35b5df9886333ea9cbb7f3852243b61ad7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:40:50Z\\\",\\\"message\\\":\\\"0.980093 6718 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 
15:40:50.980099 6718 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 15:40:50.980105 6718 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0320 15:40:50.980374 6718 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.980766 6718 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.981084 6718 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 15:40:50.981851 6718 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 15:40:50.982080 6718 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 15:40:50.982550 6718 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0320 15:40:50.982574 6718 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0320 15:40:50.982620 6718 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 15:40:50.982656 6718 factory.go:656] Stopping watch factory\\\\nI0320 15:40:50.982676 6718 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for 
network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01e
fc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:05 crc kubenswrapper[4730]: I0320 15:41:05.368988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:05Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.108788 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/2.log" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.112784 4730 scope.go:117] "RemoveContainer" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.112951 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.127318 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.139730 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.164153 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.180496 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.194744 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.217598 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15
:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.234374 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.248172 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.261361 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.273780 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.285771 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.302926 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.321100 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.337033 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.355191 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.372620 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.390721 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:06Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:06 crc 
kubenswrapper[4730]: I0320 15:41:06.532698 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.532844 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.532942 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.532967 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.533112 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.533291 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.533465 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.533601 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:06 crc kubenswrapper[4730]: I0320 15:41:06.552541 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 20 15:41:06 crc kubenswrapper[4730]: E0320 15:41:06.630619 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:08 crc kubenswrapper[4730]: I0320 15:41:08.532288 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:08 crc kubenswrapper[4730]: I0320 15:41:08.532384 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:08 crc kubenswrapper[4730]: E0320 15:41:08.532414 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:08 crc kubenswrapper[4730]: I0320 15:41:08.532572 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:08 crc kubenswrapper[4730]: E0320 15:41:08.532571 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:08 crc kubenswrapper[4730]: E0320 15:41:08.532617 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:08 crc kubenswrapper[4730]: I0320 15:41:08.532991 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:08 crc kubenswrapper[4730]: E0320 15:41:08.533358 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:10 crc kubenswrapper[4730]: I0320 15:41:10.533029 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:10 crc kubenswrapper[4730]: I0320 15:41:10.533080 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:10 crc kubenswrapper[4730]: E0320 15:41:10.534281 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:10 crc kubenswrapper[4730]: E0320 15:41:10.534481 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:10 crc kubenswrapper[4730]: I0320 15:41:10.533134 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:10 crc kubenswrapper[4730]: I0320 15:41:10.533230 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:10 crc kubenswrapper[4730]: E0320 15:41:10.534848 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:10 crc kubenswrapper[4730]: E0320 15:41:10.534945 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.558514 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.591219 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.611907 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: E0320 15:41:11.632741 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.637524 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\"
 for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.657438 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.674819 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.688727 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.704104 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.718124 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.731433 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.749296 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.772092 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.786527 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.797820 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.808292 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.820449 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc kubenswrapper[4730]: I0320 15:41:11.834212 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:11 crc 
kubenswrapper[4730]: I0320 15:41:11.846098 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:11Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:12 crc kubenswrapper[4730]: I0320 15:41:12.532703 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:12 crc kubenswrapper[4730]: I0320 15:41:12.532761 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:12 crc kubenswrapper[4730]: E0320 15:41:12.533438 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:12 crc kubenswrapper[4730]: I0320 15:41:12.532836 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:12 crc kubenswrapper[4730]: E0320 15:41:12.533557 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:12 crc kubenswrapper[4730]: I0320 15:41:12.532820 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:12 crc kubenswrapper[4730]: E0320 15:41:12.533208 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:12 crc kubenswrapper[4730]: E0320 15:41:12.533660 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.219218 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.219304 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.219325 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.219348 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.219365 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:14Z","lastTransitionTime":"2026-03-20T15:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.240672 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:14Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.246050 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.246096 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.246112 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.246135 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.246149 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:14Z","lastTransitionTime":"2026-03-20T15:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.265788 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:14Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.272771 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.272823 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.272841 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.272864 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.272880 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:14Z","lastTransitionTime":"2026-03-20T15:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.292891 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:14Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.298550 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.298650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.298671 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.298697 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.298715 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:14Z","lastTransitionTime":"2026-03-20T15:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.316983 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:14Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.321668 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.321725 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.321746 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.321772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.321791 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:14Z","lastTransitionTime":"2026-03-20T15:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.344810 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:14Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.345042 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.532497 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.532592 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.532513 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:14 crc kubenswrapper[4730]: I0320 15:41:14.532638 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.532719 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.532868 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.532946 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:14 crc kubenswrapper[4730]: E0320 15:41:14.533090 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:15 crc kubenswrapper[4730]: I0320 15:41:15.546601 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.532124 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.532179 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.532229 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.532124 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.532383 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.532527 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.532652 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.532789 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.538814 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.538972 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539015 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.538983294 +0000 UTC m=+199.752354703 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.539131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539146 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539239 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.539210 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539318 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539369 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539195 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.539389 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539445 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.539417447 +0000 UTC m=+199.752788846 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539479 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.539460808 +0000 UTC m=+199.752832267 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539521 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.53950924 +0000 UTC m=+199.752880649 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539521 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539583 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539597 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.539658 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.539646254 +0000 UTC m=+199.753017663 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.634648 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:16 crc kubenswrapper[4730]: I0320 15:41:16.741211 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.741415 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:41:16 crc kubenswrapper[4730]: E0320 15:41:16.741509 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:42:20.741485562 +0000 UTC m=+199.954856961 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:41:18 crc kubenswrapper[4730]: I0320 15:41:18.532955 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:18 crc kubenswrapper[4730]: I0320 15:41:18.533053 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:18 crc kubenswrapper[4730]: E0320 15:41:18.533083 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:18 crc kubenswrapper[4730]: I0320 15:41:18.533117 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:18 crc kubenswrapper[4730]: I0320 15:41:18.533165 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:18 crc kubenswrapper[4730]: E0320 15:41:18.533296 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:18 crc kubenswrapper[4730]: E0320 15:41:18.533441 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:18 crc kubenswrapper[4730]: E0320 15:41:18.533474 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:19 crc kubenswrapper[4730]: I0320 15:41:19.533833 4730 scope.go:117] "RemoveContainer" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" Mar 20 15:41:19 crc kubenswrapper[4730]: E0320 15:41:19.534064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:20 crc kubenswrapper[4730]: I0320 15:41:20.532953 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:20 crc kubenswrapper[4730]: I0320 15:41:20.533032 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:20 crc kubenswrapper[4730]: E0320 15:41:20.533104 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:20 crc kubenswrapper[4730]: I0320 15:41:20.533050 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:20 crc kubenswrapper[4730]: E0320 15:41:20.533272 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:20 crc kubenswrapper[4730]: I0320 15:41:20.533344 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:20 crc kubenswrapper[4730]: E0320 15:41:20.533492 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:20 crc kubenswrapper[4730]: E0320 15:41:20.533614 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.553177 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.571824 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.628343 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: E0320 15:41:21.635996 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.659194 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.675650 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.688582 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc 
kubenswrapper[4730]: I0320 15:41:21.701467 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.714500 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.731161 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.751923 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.765817 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.784745 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.801239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c660695
9a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9
c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.827644 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.842545 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.856099 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.875079 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.887551 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:21 crc kubenswrapper[4730]: I0320 15:41:21.898769 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:21Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:22 crc kubenswrapper[4730]: I0320 15:41:22.532227 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:22 crc kubenswrapper[4730]: I0320 15:41:22.532308 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:22 crc kubenswrapper[4730]: E0320 15:41:22.532931 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:22 crc kubenswrapper[4730]: E0320 15:41:22.533146 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:22 crc kubenswrapper[4730]: I0320 15:41:22.532330 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:22 crc kubenswrapper[4730]: E0320 15:41:22.533356 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:22 crc kubenswrapper[4730]: I0320 15:41:22.532390 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:22 crc kubenswrapper[4730]: E0320 15:41:22.533576 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.392354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.392400 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.392412 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.392426 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.392437 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:24Z","lastTransitionTime":"2026-03-20T15:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.412693 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:24Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.417772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.417804 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.417813 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.417826 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.417838 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:24Z","lastTransitionTime":"2026-03-20T15:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.432333 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:24Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.436590 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.436632 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.436640 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.436654 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.436663 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:24Z","lastTransitionTime":"2026-03-20T15:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.455768 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:24Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.459723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.459772 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.459785 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.459800 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.459811 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:24Z","lastTransitionTime":"2026-03-20T15:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.481801 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:24Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.487213 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.487268 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.487280 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.487295 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.487307 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:24Z","lastTransitionTime":"2026-03-20T15:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.505882 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:24Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.506588 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.532268 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.532322 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.532277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.532436 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.532546 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.532709 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:24 crc kubenswrapper[4730]: I0320 15:41:24.532805 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:24 crc kubenswrapper[4730]: E0320 15:41:24.533323 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:26 crc kubenswrapper[4730]: I0320 15:41:26.532023 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:26 crc kubenswrapper[4730]: I0320 15:41:26.532029 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:26 crc kubenswrapper[4730]: I0320 15:41:26.532028 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:26 crc kubenswrapper[4730]: E0320 15:41:26.532221 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:26 crc kubenswrapper[4730]: E0320 15:41:26.532413 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:26 crc kubenswrapper[4730]: I0320 15:41:26.532539 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:26 crc kubenswrapper[4730]: E0320 15:41:26.532645 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:26 crc kubenswrapper[4730]: E0320 15:41:26.532750 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:26 crc kubenswrapper[4730]: E0320 15:41:26.637897 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.187093 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/0.log" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.187173 4730 generic.go:334] "Generic (PLEG): container finished" podID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" containerID="f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6" exitCode=1 Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.187217 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerDied","Data":"f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6"} Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.187839 4730 scope.go:117] "RemoveContainer" containerID="f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.204499 4730 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.220740 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.234870 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.253013 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.269109 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.282279 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.296975 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.314573 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.328439 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.349540 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.363663 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.375021 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.393596 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15
:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.407849 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.417789 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.429723 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.441116 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.449532 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:27 crc kubenswrapper[4730]: I0320 15:41:27.461039 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:27Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.192372 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/0.log" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.192446 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerStarted","Data":"12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812"} Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.205558 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc 
kubenswrapper[4730]: I0320 15:41:28.223203 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.238910 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.257307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.274088 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.290820 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.309440 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.322922 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.335328 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.355767 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.369909 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.383896 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.395312 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.408108 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0
a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.429159 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c292
95ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.444485 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.455712 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.472074 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.484574 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:28Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.532917 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.532917 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.532952 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:28 crc kubenswrapper[4730]: I0320 15:41:28.532980 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:28 crc kubenswrapper[4730]: E0320 15:41:28.533644 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:28 crc kubenswrapper[4730]: E0320 15:41:28.533767 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:28 crc kubenswrapper[4730]: E0320 15:41:28.533868 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:28 crc kubenswrapper[4730]: E0320 15:41:28.533967 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:30 crc kubenswrapper[4730]: I0320 15:41:30.532180 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:30 crc kubenswrapper[4730]: E0320 15:41:30.532327 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:30 crc kubenswrapper[4730]: I0320 15:41:30.532180 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:30 crc kubenswrapper[4730]: I0320 15:41:30.532199 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:30 crc kubenswrapper[4730]: I0320 15:41:30.532187 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:30 crc kubenswrapper[4730]: E0320 15:41:30.532699 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:30 crc kubenswrapper[4730]: E0320 15:41:30.532823 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:30 crc kubenswrapper[4730]: E0320 15:41:30.532920 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.533663 4730 scope.go:117] "RemoveContainer" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.549382 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.561231 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.588013 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15
:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.610607 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.624239 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.637880 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: E0320 15:41:31.639051 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.649682 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.659320 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.672988 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.685189 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.694606 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.704657 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.717115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.728980 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.740182 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.752802 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.764855 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.775073 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:31 crc kubenswrapper[4730]: I0320 15:41:31.789746 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:31Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.205822 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/2.log" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.208016 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.208451 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.220030 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.228028 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.237302 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbffb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.250184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.266237 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.277049 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.290701 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.312323 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.326509 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.338636 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.348623 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.429614 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.441162 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.452616 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d
389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.462115 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.474041 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.494949 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 
address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.506945 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.519571 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.532965 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.532987 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.533009 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:32 crc kubenswrapper[4730]: E0320 15:41:32.533678 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:32 crc kubenswrapper[4730]: I0320 15:41:32.533040 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:32 crc kubenswrapper[4730]: E0320 15:41:32.533737 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:32 crc kubenswrapper[4730]: E0320 15:41:32.534651 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:32 crc kubenswrapper[4730]: E0320 15:41:32.538983 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.213701 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/3.log" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.214912 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/2.log" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.219629 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" exitCode=1 Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.219670 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" 
event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.219706 4730 scope.go:117] "RemoveContainer" containerID="03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.222140 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:41:33 crc kubenswrapper[4730]: E0320 15:41:33.222856 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.238694 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.255901 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.273610 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.295031 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03c5491305bb4c7cdbc1498dbb8cc5fe445c665c58d977511f19cb13251f7ef0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:04Z\\\",\\\"message\\\":\\\" LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0320 15:41:04.407421 6925 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 4.127302ms\\\\nI0320 15:41:04.407620 6925 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0320 15:41:04.407658 6925 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0320 15:41:04.407675 6925 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0320 15:41:04.407733 6925 factory.go:1336] Added *v1.Node event handler 7\\\\nI0320 15:41:04.407751 6925 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0320 15:41:04.408006 6925 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0320 15:41:04.408084 6925 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0320 15:41:04.408106 6925 ovnkube.go:599] Stopped ovnkube\\\\nI0320 15:41:04.408148 6925 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0320 15:41:04.408227 6925 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:32Z\\\",\\\"message\\\":\\\"44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0320 15:41:32.432008 7239 lb_config.go:1031] Cluster endpoints for openshift-ingress-operator/metrics for network=default are: map[]\\\\nF0320 15:41:32.432028 7239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z]\\\\nI0320 15:41:32.432031 7239 services_controller.go:443] Built service openshift-ingress-operator/metrics LB cluster-wide 
confi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a
4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.307517 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.319305 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.330337 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.341338 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.352997 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0
a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.374675 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c292
95ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.388879 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.401798 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.416169 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.433830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.448863 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.463923 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d
389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.475283 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.495020 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:33 crc kubenswrapper[4730]: I0320 15:41:33.512934 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:33Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.226855 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/3.log" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.232358 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.232646 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.250989 4730 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.266184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.285322 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.320387 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.342441 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.359941 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.379553 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.420386 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.438993 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.458598 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d
389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.470345 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.480635 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.491708 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.501444 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.511595 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.522501 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.532683 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.532903 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.532718 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.533115 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.532683 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.533315 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.532718 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.533518 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.538636 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:32Z\\\",\\\"message\\\":\\\"44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0320 15:41:32.432008 7239 lb_config.go:1031] Cluster endpoints for 
openshift-ingress-operator/metrics for network=default are: map[]\\\\nF0320 15:41:32.432028 7239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z]\\\\nI0320 15:41:32.432031 7239 services_controller.go:443] Built service openshift-ingress-operator/metrics LB cluster-wide confi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.550534 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.550723 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.550796 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.550868 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.550931 4730 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:34Z","lastTransitionTime":"2026-03-20T15:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.554419 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.565694 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.569672 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.573082 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.573105 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.573114 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.573128 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.573137 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:34Z","lastTransitionTime":"2026-03-20T15:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.585269 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.588349 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.588384 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.588398 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.588414 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.588426 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:34Z","lastTransitionTime":"2026-03-20T15:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.602022 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.605002 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.605038 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.605062 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.605076 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.605086 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:34Z","lastTransitionTime":"2026-03-20T15:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.617355 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.620205 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.620228 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.620236 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.620259 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:34 crc kubenswrapper[4730]: I0320 15:41:34.620268 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:34Z","lastTransitionTime":"2026-03-20T15:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.631500 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:34Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:34 crc kubenswrapper[4730]: E0320 15:41:34.631604 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:41:36 crc kubenswrapper[4730]: I0320 15:41:36.532179 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:36 crc kubenswrapper[4730]: I0320 15:41:36.532226 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:36 crc kubenswrapper[4730]: I0320 15:41:36.532200 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:36 crc kubenswrapper[4730]: I0320 15:41:36.532177 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:36 crc kubenswrapper[4730]: E0320 15:41:36.532390 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:36 crc kubenswrapper[4730]: E0320 15:41:36.532562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:36 crc kubenswrapper[4730]: E0320 15:41:36.532687 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:36 crc kubenswrapper[4730]: E0320 15:41:36.532955 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:36 crc kubenswrapper[4730]: E0320 15:41:36.640598 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:38 crc kubenswrapper[4730]: I0320 15:41:38.533054 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:38 crc kubenswrapper[4730]: E0320 15:41:38.533310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:38 crc kubenswrapper[4730]: I0320 15:41:38.533513 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:38 crc kubenswrapper[4730]: I0320 15:41:38.533631 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:38 crc kubenswrapper[4730]: E0320 15:41:38.533673 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:38 crc kubenswrapper[4730]: I0320 15:41:38.533744 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:38 crc kubenswrapper[4730]: E0320 15:41:38.533865 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:38 crc kubenswrapper[4730]: E0320 15:41:38.534015 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:40 crc kubenswrapper[4730]: I0320 15:41:40.532948 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:40 crc kubenswrapper[4730]: I0320 15:41:40.533026 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:40 crc kubenswrapper[4730]: I0320 15:41:40.533131 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:40 crc kubenswrapper[4730]: I0320 15:41:40.533191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:40 crc kubenswrapper[4730]: E0320 15:41:40.533283 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:40 crc kubenswrapper[4730]: E0320 15:41:40.533399 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:40 crc kubenswrapper[4730]: E0320 15:41:40.533496 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:40 crc kubenswrapper[4730]: E0320 15:41:40.533557 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.565475 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.590753 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.610747 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.633707 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: E0320 15:41:41.641474 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.660187 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.677688 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.697473 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6da6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245f
d372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.709830 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.720617 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.733184 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.744453 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.759307 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.770751 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.783896 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.794578 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.805523 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.821788 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:32Z\\\",\\\"message\\\":\\\"44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTraf
ficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0320 15:41:32.432008 7239 lb_config.go:1031] Cluster endpoints for openshift-ingress-operator/metrics for network=default are: map[]\\\\nF0320 15:41:32.432028 7239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z]\\\\nI0320 15:41:32.432031 7239 services_controller.go:443] Built service openshift-ingress-operator/metrics LB cluster-wide confi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.834278 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:41 crc kubenswrapper[4730]: I0320 15:41:41.844013 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:41Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:42 crc kubenswrapper[4730]: I0320 15:41:42.532148 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:42 crc kubenswrapper[4730]: I0320 15:41:42.532213 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:42 crc kubenswrapper[4730]: I0320 15:41:42.532293 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:42 crc kubenswrapper[4730]: I0320 15:41:42.532347 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:42 crc kubenswrapper[4730]: E0320 15:41:42.533530 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:42 crc kubenswrapper[4730]: E0320 15:41:42.533682 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:42 crc kubenswrapper[4730]: E0320 15:41:42.533091 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:42 crc kubenswrapper[4730]: E0320 15:41:42.533705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.533130 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.533189 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.533136 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:44 crc kubenswrapper[4730]: E0320 15:41:44.533337 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:44 crc kubenswrapper[4730]: E0320 15:41:44.533444 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:44 crc kubenswrapper[4730]: E0320 15:41:44.533550 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.534539 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:44 crc kubenswrapper[4730]: E0320 15:41:44.534762 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.960310 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.960367 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.960386 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.960408 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.960426 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:44Z","lastTransitionTime":"2026-03-20T15:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:44 crc kubenswrapper[4730]: E0320 15:41:44.976853 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.982169 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.982223 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.982276 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.982309 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:44 crc kubenswrapper[4730]: I0320 15:41:44.982331 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:44Z","lastTransitionTime":"2026-03-20T15:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:45 crc kubenswrapper[4730]: E0320 15:41:45.001953 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:44Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.005583 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.005636 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.005650 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.005676 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.005696 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:45Z","lastTransitionTime":"2026-03-20T15:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:45 crc kubenswrapper[4730]: E0320 15:41:45.019683 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:45Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.023288 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.023354 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.023370 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.023388 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.023401 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:45Z","lastTransitionTime":"2026-03-20T15:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:45 crc kubenswrapper[4730]: E0320 15:41:45.038904 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:45Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.041794 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.041830 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.041843 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.041861 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:45 crc kubenswrapper[4730]: I0320 15:41:45.041872 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:45Z","lastTransitionTime":"2026-03-20T15:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:45 crc kubenswrapper[4730]: E0320 15:41:45.057594 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"666d62d4-aa52-41cc-be79-8c9a068e7752\\\",\\\"systemUUID\\\":\\\"dfe7d645-fe91-432e-8360-ef4633bfea29\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:45Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:45 crc kubenswrapper[4730]: E0320 15:41:45.057721 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:41:46 crc kubenswrapper[4730]: I0320 15:41:46.532086 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:46 crc kubenswrapper[4730]: I0320 15:41:46.532141 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:46 crc kubenswrapper[4730]: E0320 15:41:46.532969 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:46 crc kubenswrapper[4730]: I0320 15:41:46.532311 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:46 crc kubenswrapper[4730]: I0320 15:41:46.532169 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:46 crc kubenswrapper[4730]: E0320 15:41:46.533107 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:46 crc kubenswrapper[4730]: E0320 15:41:46.533272 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:46 crc kubenswrapper[4730]: E0320 15:41:46.533364 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:46 crc kubenswrapper[4730]: E0320 15:41:46.642810 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:41:47 crc kubenswrapper[4730]: I0320 15:41:47.532901 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:41:47 crc kubenswrapper[4730]: E0320 15:41:47.533685 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:41:48 crc kubenswrapper[4730]: I0320 15:41:48.532102 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:48 crc kubenswrapper[4730]: I0320 15:41:48.532170 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:48 crc kubenswrapper[4730]: E0320 15:41:48.532468 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:48 crc kubenswrapper[4730]: I0320 15:41:48.532505 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:48 crc kubenswrapper[4730]: I0320 15:41:48.532561 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:48 crc kubenswrapper[4730]: E0320 15:41:48.532738 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:48 crc kubenswrapper[4730]: E0320 15:41:48.532943 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:48 crc kubenswrapper[4730]: E0320 15:41:48.532999 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:50 crc kubenswrapper[4730]: I0320 15:41:50.532668 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:50 crc kubenswrapper[4730]: I0320 15:41:50.532735 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:50 crc kubenswrapper[4730]: E0320 15:41:50.532800 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:50 crc kubenswrapper[4730]: I0320 15:41:50.532841 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:50 crc kubenswrapper[4730]: I0320 15:41:50.532926 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:50 crc kubenswrapper[4730]: E0320 15:41:50.533028 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:50 crc kubenswrapper[4730]: E0320 15:41:50.533288 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:50 crc kubenswrapper[4730]: E0320 15:41:50.533416 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.560672 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"708c4e4c-a9ad-466a-a3d0-a99ae3399934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://631f7e03c5476d53c6d6a3a3bf6a1d45ac40443386f072fc7c01129e9eaf7619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d029581dfd248d1298c93e18a5352686b5e89fb730ede58a29400c62deb2bc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a81d62ce9d72579cc4dd4778d22cf442919e08c7d83209fde12edc8a338684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b365c72e5e05c29295ba289a5bf795353c5221607dadd2ab8b4f642272d99fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfacdb41d11bf0b03bede308c267faddfd0241d99a4cbda2603c6539bc3a433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452bb8a7271d9ab590cf3dadd911d28ca0a8e11759db651f023858cf8089cba2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://401768d00b76ca556c6946881677a30c5e8dc1a0e680743b86dbe11d3ef0e61c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2302456cbdefcc20adf9d01130c9726f807b89a7a39d032d86a1f6f3901c506\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.577597 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c3fe2f-3c67-4dee-becb-3ecfe2758384\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 15:39:59.021893 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 15:39:59.022050 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 15:39:59.023119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3311534806/tls.crt::/tmp/serving-cert-3311534806/tls.key\\\\\\\"\\\\nI0320 15:39:59.307640 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 15:39:59.310126 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 15:39:59.310145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 15:39:59.310173 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 15:39:59.310181 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 15:39:59.315103 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 15:39:59.315287 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 15:39:59.315111 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 15:39:59.315373 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 15:39:59.315499 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 15:39:59.315558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 15:39:59.315615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 15:39:59.315659 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 15:39:59.316192 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e683393c25a03a11f7a3e2e4bc5231217db
3fa7cd807971f61734acea801ca9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.590919 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25d8b0d-e1be-41c2-ada3-49aaecfe3bc0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1242122dff682493181de927aefeb2ac4e5274621d865ee89ccec2e60e86469b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1a8117d479604ca09fc23e043f373aa3acbf
fb209d9fbfae95f2eb907424712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d4xpw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p47zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.606312 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e647990de11fdbff5c61eb5f2a4f36ee1c2562141d84795cf0d7bbc1155cac28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://188974c8e34d453f129d95f8a3ee36ff86b44b37a5388a0643611206f15d57c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.621174 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fcd3db3-55f1-4c23-8fa9-78844495cea3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e20a286d6affcba7ffa950ef5386e7f439c9a02381cb8b7d3bc51ad9c4f343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd661cda796a2bd61d1446bee672c4471d603702
45caf7cb54faf54dfa9c58a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzk8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5qvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.636571 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-69fnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"102cb977-7291-453e-9282-20572071afee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35fa36f52355ad1fd24bd332fc5b0a5ff35ac2be71dd16b53449d9f47e48e8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plthx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-69fnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: E0320 15:41:51.643586 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.653532 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-49hht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbb015c0-3a11-48bf-a59f-22bc03ca2fb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c34c742c6d
a6c6d35f815901234a0c12c9628d22fa83e511bddc78eae4373cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458c14e265e86cad40fc02fc28e4e3b9210fbae7c6fa7d26a0018cef7bf02b85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7567c6606959a0875ab14452050996da4ff86fcfecfd82a6c2ba826136a69a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd
367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc1f1bf348c9c58382cf7ce0065613fde5d4b11b2ccc480c6ee8c40adf70fe34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://535e0a48f9967f05eb75c2e5518b11385cba269c5e3d50f13b26e0ef8209c134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d05aa3e72971a8965d62f6cdae8245fd372b5d8a9df8e5a524b34326ad9f91a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfa0e3e3af5612de41c905d807532a50887378171356a3dade8aaf3d29fe9bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qtg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-49hht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.669463 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf1c50b-d896-46ac-8c1c-2368a862eb88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2006b33d30cf2fd57843f3df0fb087253dd116f48a4d807c31260ce7508b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badcec1b25a9d088fe7e563366ee7568adcabfe9c29a536db19fe3119b10f229\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7f50a49e995c2647a19bd3dedd3ca85f1d7d0279df106c153af39641af9ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2e8cd87f56c4a70c698163de2d3f364420981943d389a3cc9b64401bb5fbf08e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.684391 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07c87d31-bb33-4cbb-8786-b42cee6340a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4298d984334a4955a24ac85a16284d8ffe1e4f3033c328be6e66f59d3b17f1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35485648d66e1e884978d1ea90e47c514192f3fa8382f6c74eb48ec2a587ec6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.696627 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://055b0251e049abda93d63985b27520b79b2596443df06427dd990fc974b52a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.710210 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.729301 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r2kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f97b1f1-1fad-44ec-8253-17dd6a5eee54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:27Z\\\",\\\"message\\\":\\\"2026-03-20T15:40:41+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9\\\\n2026-03-20T15:40:41+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5b4f3207-7d26-40ce-9cb0-76c6f07e26b9 to /host/opt/cni/bin/\\\\n2026-03-20T15:40:42Z [verbose] multus-daemon started\\\\n2026-03-20T15:40:42Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T15:41:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvthz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6r2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.746481 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2prfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2prfn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.766167 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eed0424-45fd-4b1e-8b59-d041af7fb08f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://620070760ce503ee2102ce0880913637feb032124892ce1a1e2060939f38e050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b34522460ebd4556ce4291e5c5132788387cf45b0be3b9535af9262948b71ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T15:39:27Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 15:39:03.602145 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 15:39:03.606021 1 observer_polling.go:159] Starting file observer\\\\nI0320 15:39:03.640071 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 15:39:03.644835 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 15:39:27.437935 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 15:39:27.438079 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:39:27Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99093fe46696a888b221d24d1b42226d0ff16bab6b3fb2a718d055cf97066a69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://899dbd6715433cfe5141851019e164daea952552c26706648245fd6319168685\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:39:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.783175 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f8a35dfa17acf7f7051d13b20cab55fe91645c8fa2773fed67baddae164b586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.799450 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.817962 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4b4e0e8-af33-491e-b1d1-31079d90c656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T15:41:32Z\\\",\\\"message\\\":\\\"44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTraf
ficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0320 15:41:32.432008 7239 lb_config.go:1031] Cluster endpoints for openshift-ingress-operator/metrics for network=default are: map[]\\\\nF0320 15:41:32.432028 7239 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:32Z is after 2025-08-24T17:21:41Z]\\\\nI0320 15:41:32.432031 7239 services_controller.go:443] Built service openshift-ingress-operator/metrics LB cluster-wide confi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T15:41:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f5c77e05a32be063
074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T15:40:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T15:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qj97f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.835040 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:51 crc kubenswrapper[4730]: I0320 15:41:51.846856 4730 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n4w74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ee8d55f-90bd-4484-8455-933de455efea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T15:40:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2070438cbc1bdd8e3c865f5aa62537649598e0f51effc9936b9cb1a630b3651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T15:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fvg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T15:40:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n4w74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T15:41:51Z is after 2025-08-24T17:21:41Z" Mar 20 15:41:52 crc kubenswrapper[4730]: I0320 15:41:52.532430 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:52 crc kubenswrapper[4730]: E0320 15:41:52.532562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:52 crc kubenswrapper[4730]: I0320 15:41:52.532631 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:52 crc kubenswrapper[4730]: I0320 15:41:52.532637 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:52 crc kubenswrapper[4730]: E0320 15:41:52.532925 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:52 crc kubenswrapper[4730]: I0320 15:41:52.532637 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:52 crc kubenswrapper[4730]: E0320 15:41:52.532769 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:52 crc kubenswrapper[4730]: E0320 15:41:52.533008 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:54 crc kubenswrapper[4730]: I0320 15:41:54.532211 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:54 crc kubenswrapper[4730]: I0320 15:41:54.532220 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:54 crc kubenswrapper[4730]: E0320 15:41:54.533161 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:54 crc kubenswrapper[4730]: I0320 15:41:54.532327 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:54 crc kubenswrapper[4730]: I0320 15:41:54.532277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:54 crc kubenswrapper[4730]: E0320 15:41:54.533317 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:54 crc kubenswrapper[4730]: E0320 15:41:54.533410 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:54 crc kubenswrapper[4730]: E0320 15:41:54.533471 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.201870 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.202147 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.202219 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.202324 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.202399 4730 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T15:41:55Z","lastTransitionTime":"2026-03-20T15:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.259146 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj"] Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.259640 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: W0320 15:41:55.262328 4730 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Mar 20 15:41:55 crc kubenswrapper[4730]: E0320 15:41:55.262396 4730 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.262804 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.262847 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.262924 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.309803 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=49.309784251 podStartE2EDuration="49.309784251s" podCreationTimestamp="2026-03-20 15:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.290316332 +0000 UTC m=+174.503687711" watchObservedRunningTime="2026-03-20 15:41:55.309784251 +0000 UTC m=+174.523155630" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.366393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a16a52-ae94-449a-ba45-a98829e0a60d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.366482 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.366539 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a16a52-ae94-449a-ba45-a98829e0a60d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.366573 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.366637 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.393912 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n4w74" podStartSLOduration=123.393894332 podStartE2EDuration="2m3.393894332s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.383003513 +0000 UTC m=+174.596374882" watchObservedRunningTime="2026-03-20 15:41:55.393894332 +0000 UTC m=+174.607265691" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.414994 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=75.414976789 podStartE2EDuration="1m15.414976789s" podCreationTimestamp="2026-03-20 15:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.414581967 +0000 UTC m=+174.627953336" watchObservedRunningTime="2026-03-20 15:41:55.414976789 +0000 UTC m=+174.628348158" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.429879 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=101.429860779 podStartE2EDuration="1m41.429860779s" podCreationTimestamp="2026-03-20 15:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.429399305 +0000 UTC m=+174.642770684" watchObservedRunningTime="2026-03-20 15:41:55.429860779 +0000 UTC m=+174.643232148" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.453705 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p47zh" podStartSLOduration=122.453684028 podStartE2EDuration="2m2.453684028s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.441447229 +0000 UTC m=+174.654818588" watchObservedRunningTime="2026-03-20 15:41:55.453684028 +0000 UTC m=+174.667055397" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467468 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a16a52-ae94-449a-ba45-a98829e0a60d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc 
kubenswrapper[4730]: I0320 15:41:55.467569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467594 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a16a52-ae94-449a-ba45-a98829e0a60d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467613 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.467969 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30a16a52-ae94-449a-ba45-a98829e0a60d-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.474927 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podStartSLOduration=123.47490991 podStartE2EDuration="2m3.47490991s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.465068082 +0000 UTC m=+174.678439461" watchObservedRunningTime="2026-03-20 15:41:55.47490991 +0000 UTC m=+174.688281279" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.475573 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-69fnw" podStartSLOduration=123.475565299 podStartE2EDuration="2m3.475565299s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.474569639 +0000 UTC m=+174.687941008" watchObservedRunningTime="2026-03-20 15:41:55.475565299 +0000 UTC m=+174.688936668" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.482408 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a16a52-ae94-449a-ba45-a98829e0a60d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.486455 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a16a52-ae94-449a-ba45-a98829e0a60d-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.495339 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-49hht" podStartSLOduration=123.495322386 podStartE2EDuration="2m3.495322386s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.494914404 +0000 UTC m=+174.708285773" watchObservedRunningTime="2026-03-20 15:41:55.495322386 +0000 UTC m=+174.708693755" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.508120 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.508098922 podStartE2EDuration="40.508098922s" podCreationTimestamp="2026-03-20 15:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.50734948 +0000 UTC m=+174.720720859" watchObservedRunningTime="2026-03-20 15:41:55.508098922 +0000 UTC m=+174.721470301" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.534542 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=93.534521141 podStartE2EDuration="1m33.534521141s" podCreationTimestamp="2026-03-20 15:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.523319382 +0000 UTC m=+174.736690751" watchObservedRunningTime="2026-03-20 15:41:55.534521141 +0000 UTC m=+174.747892520" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.572434 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6r2kn" podStartSLOduration=123.572418986 podStartE2EDuration="2m3.572418986s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:55.562344101 +0000 UTC m=+174.775715470" watchObservedRunningTime="2026-03-20 15:41:55.572418986 +0000 UTC m=+174.785790355" Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.581729 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 20 15:41:55 crc kubenswrapper[4730]: I0320 15:41:55.588516 4730 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.468073 4730 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.468193 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca podName:30a16a52-ae94-449a-ba45-a98829e0a60d nodeName:}" failed. No retries permitted until 2026-03-20 15:41:56.968167017 +0000 UTC m=+176.181538386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca") pod "cluster-version-operator-5c965bbfc6-vfvcj" (UID: "30a16a52-ae94-449a-ba45-a98829e0a60d") : failed to sync configmap cache: timed out waiting for the condition Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.488880 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.532457 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.532485 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.532804 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.533161 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.533389 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.533563 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.534607 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.534790 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:56 crc kubenswrapper[4730]: E0320 15:41:56.645658 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.984615 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:56 crc kubenswrapper[4730]: I0320 15:41:56.986476 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30a16a52-ae94-449a-ba45-a98829e0a60d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vfvcj\" (UID: \"30a16a52-ae94-449a-ba45-a98829e0a60d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:57 crc kubenswrapper[4730]: I0320 15:41:57.078324 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" Mar 20 15:41:57 crc kubenswrapper[4730]: I0320 15:41:57.309321 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" event={"ID":"30a16a52-ae94-449a-ba45-a98829e0a60d","Type":"ContainerStarted","Data":"f6345dfc0fb582e6ffcc50d8ae0c8e7b2847b61888dbe556736c5310027440c1"} Mar 20 15:41:57 crc kubenswrapper[4730]: I0320 15:41:57.309391 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" event={"ID":"30a16a52-ae94-449a-ba45-a98829e0a60d","Type":"ContainerStarted","Data":"0f82b0eab87c550337070792b2fa7f0a7aae2a1e760a773dbdc47fa8aa632ca3"} Mar 20 15:41:57 crc kubenswrapper[4730]: I0320 15:41:57.323605 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vfvcj" podStartSLOduration=125.323584122 
podStartE2EDuration="2m5.323584122s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:41:57.323343005 +0000 UTC m=+176.536714394" watchObservedRunningTime="2026-03-20 15:41:57.323584122 +0000 UTC m=+176.536955501" Mar 20 15:41:58 crc kubenswrapper[4730]: I0320 15:41:58.533058 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:41:58 crc kubenswrapper[4730]: I0320 15:41:58.533207 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:41:58 crc kubenswrapper[4730]: E0320 15:41:58.533345 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:41:58 crc kubenswrapper[4730]: I0320 15:41:58.533069 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:41:58 crc kubenswrapper[4730]: E0320 15:41:58.533469 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:41:58 crc kubenswrapper[4730]: E0320 15:41:58.533639 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:41:58 crc kubenswrapper[4730]: I0320 15:41:58.533731 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:41:58 crc kubenswrapper[4730]: E0320 15:41:58.533922 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:00 crc kubenswrapper[4730]: I0320 15:42:00.532978 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:00 crc kubenswrapper[4730]: I0320 15:42:00.532979 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:00 crc kubenswrapper[4730]: I0320 15:42:00.533034 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:00 crc kubenswrapper[4730]: I0320 15:42:00.533586 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:00 crc kubenswrapper[4730]: E0320 15:42:00.533745 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:00 crc kubenswrapper[4730]: E0320 15:42:00.533859 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:00 crc kubenswrapper[4730]: E0320 15:42:00.533964 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:00 crc kubenswrapper[4730]: E0320 15:42:00.534016 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:01 crc kubenswrapper[4730]: I0320 15:42:01.538550 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:42:01 crc kubenswrapper[4730]: E0320 15:42:01.538700 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qj97f_openshift-ovn-kubernetes(c4b4e0e8-af33-491e-b1d1-31079d90c656)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" Mar 20 15:42:01 crc kubenswrapper[4730]: E0320 15:42:01.646993 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:02 crc kubenswrapper[4730]: I0320 15:42:02.532151 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:02 crc kubenswrapper[4730]: I0320 15:42:02.532237 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:02 crc kubenswrapper[4730]: E0320 15:42:02.532972 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:02 crc kubenswrapper[4730]: I0320 15:42:02.532339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:02 crc kubenswrapper[4730]: I0320 15:42:02.532290 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:02 crc kubenswrapper[4730]: E0320 15:42:02.533462 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:02 crc kubenswrapper[4730]: E0320 15:42:02.533585 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:02 crc kubenswrapper[4730]: E0320 15:42:02.533366 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:04 crc kubenswrapper[4730]: I0320 15:42:04.532621 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:04 crc kubenswrapper[4730]: I0320 15:42:04.532733 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:04 crc kubenswrapper[4730]: I0320 15:42:04.532910 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:04 crc kubenswrapper[4730]: E0320 15:42:04.532932 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:04 crc kubenswrapper[4730]: E0320 15:42:04.532747 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:04 crc kubenswrapper[4730]: E0320 15:42:04.532954 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:04 crc kubenswrapper[4730]: I0320 15:42:04.533004 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:04 crc kubenswrapper[4730]: E0320 15:42:04.533040 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:06 crc kubenswrapper[4730]: I0320 15:42:06.532376 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:06 crc kubenswrapper[4730]: I0320 15:42:06.532409 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:06 crc kubenswrapper[4730]: I0320 15:42:06.532436 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:06 crc kubenswrapper[4730]: E0320 15:42:06.532501 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:06 crc kubenswrapper[4730]: I0320 15:42:06.532553 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:06 crc kubenswrapper[4730]: E0320 15:42:06.532730 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:06 crc kubenswrapper[4730]: E0320 15:42:06.532916 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:06 crc kubenswrapper[4730]: E0320 15:42:06.532969 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:06 crc kubenswrapper[4730]: E0320 15:42:06.648439 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:08 crc kubenswrapper[4730]: I0320 15:42:08.532779 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:08 crc kubenswrapper[4730]: I0320 15:42:08.532812 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:08 crc kubenswrapper[4730]: I0320 15:42:08.532821 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:08 crc kubenswrapper[4730]: E0320 15:42:08.532945 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:08 crc kubenswrapper[4730]: I0320 15:42:08.533040 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:08 crc kubenswrapper[4730]: E0320 15:42:08.533334 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:08 crc kubenswrapper[4730]: E0320 15:42:08.533584 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:08 crc kubenswrapper[4730]: E0320 15:42:08.533683 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:10 crc kubenswrapper[4730]: I0320 15:42:10.532996 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:10 crc kubenswrapper[4730]: I0320 15:42:10.533064 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:10 crc kubenswrapper[4730]: E0320 15:42:10.533137 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:10 crc kubenswrapper[4730]: I0320 15:42:10.533020 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:10 crc kubenswrapper[4730]: I0320 15:42:10.533219 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:10 crc kubenswrapper[4730]: E0320 15:42:10.533329 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:10 crc kubenswrapper[4730]: E0320 15:42:10.533405 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:10 crc kubenswrapper[4730]: E0320 15:42:10.533470 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:11 crc kubenswrapper[4730]: E0320 15:42:11.648980 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:12 crc kubenswrapper[4730]: I0320 15:42:12.532572 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:12 crc kubenswrapper[4730]: I0320 15:42:12.532586 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:12 crc kubenswrapper[4730]: I0320 15:42:12.532644 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:12 crc kubenswrapper[4730]: E0320 15:42:12.532937 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:12 crc kubenswrapper[4730]: I0320 15:42:12.533053 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:12 crc kubenswrapper[4730]: E0320 15:42:12.533191 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:12 crc kubenswrapper[4730]: E0320 15:42:12.533326 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:12 crc kubenswrapper[4730]: E0320 15:42:12.533488 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.370808 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/1.log" Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.371950 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/0.log" Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.371991 4730 generic.go:334] "Generic (PLEG): container finished" podID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" containerID="12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812" exitCode=1 Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.372021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerDied","Data":"12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812"} Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.372054 4730 scope.go:117] "RemoveContainer" containerID="f7f0276b7ef4131fee3cc30530a3cb7ddf6e6be5ce73d3084a1f6f4272f9dde6" Mar 20 15:42:13 crc kubenswrapper[4730]: I0320 15:42:13.372504 4730 scope.go:117] "RemoveContainer" containerID="12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812" Mar 20 15:42:13 crc kubenswrapper[4730]: E0320 15:42:13.372713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54)\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:42:14 crc kubenswrapper[4730]: I0320 15:42:14.377328 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/1.log" Mar 20 15:42:14 crc kubenswrapper[4730]: I0320 15:42:14.532933 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:14 crc kubenswrapper[4730]: I0320 15:42:14.533042 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:14 crc kubenswrapper[4730]: I0320 15:42:14.533054 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:14 crc kubenswrapper[4730]: I0320 15:42:14.533075 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:14 crc kubenswrapper[4730]: E0320 15:42:14.533639 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:14 crc kubenswrapper[4730]: E0320 15:42:14.533707 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:14 crc kubenswrapper[4730]: E0320 15:42:14.533826 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:14 crc kubenswrapper[4730]: E0320 15:42:14.533930 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:16 crc kubenswrapper[4730]: I0320 15:42:16.533139 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:16 crc kubenswrapper[4730]: I0320 15:42:16.533273 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:16 crc kubenswrapper[4730]: E0320 15:42:16.533354 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:16 crc kubenswrapper[4730]: I0320 15:42:16.533274 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:16 crc kubenswrapper[4730]: I0320 15:42:16.533685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:16 crc kubenswrapper[4730]: E0320 15:42:16.533817 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:16 crc kubenswrapper[4730]: E0320 15:42:16.533928 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:16 crc kubenswrapper[4730]: E0320 15:42:16.534021 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:16 crc kubenswrapper[4730]: I0320 15:42:16.534306 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:42:16 crc kubenswrapper[4730]: E0320 15:42:16.651355 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.388965 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/3.log" Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.392045 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerStarted","Data":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.392483 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.433127 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podStartSLOduration=145.433108719 podStartE2EDuration="2m25.433108719s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:17.432560952 +0000 UTC m=+196.645932341" watchObservedRunningTime="2026-03-20 15:42:17.433108719 +0000 UTC m=+196.646480088" Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.455420 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2prfn"] Mar 20 15:42:17 crc kubenswrapper[4730]: I0320 15:42:17.455535 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:17 crc kubenswrapper[4730]: E0320 15:42:17.455641 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:18 crc kubenswrapper[4730]: I0320 15:42:18.532805 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:18 crc kubenswrapper[4730]: I0320 15:42:18.532808 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:18 crc kubenswrapper[4730]: E0320 15:42:18.533434 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:18 crc kubenswrapper[4730]: I0320 15:42:18.532890 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:18 crc kubenswrapper[4730]: E0320 15:42:18.533566 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:18 crc kubenswrapper[4730]: E0320 15:42:18.533735 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:19 crc kubenswrapper[4730]: I0320 15:42:19.533236 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:19 crc kubenswrapper[4730]: E0320 15:42:19.533510 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.532822 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.532853 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.532887 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.533000 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.533077 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.533154 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.539101 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.539399 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.539364574 +0000 UTC m=+321.752735953 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.640154 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.640304 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.640376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.640425 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640489 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640533 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640553 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640551 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640619 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640648 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.640624282 +0000 UTC m=+321.853995681 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640686 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.640668703 +0000 UTC m=+321.854040102 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640655 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640711 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640620 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640753 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.640742125 +0000 UTC m=+321.854113534 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.640846 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.640813967 +0000 UTC m=+321.854185407 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: I0320 15:42:20.841835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.842030 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:42:20 crc kubenswrapper[4730]: E0320 15:42:20.842117 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs 
podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:44:22.842100035 +0000 UTC m=+322.055471404 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 15:42:21 crc kubenswrapper[4730]: I0320 15:42:21.532581 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:21 crc kubenswrapper[4730]: E0320 15:42:21.535149 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:21 crc kubenswrapper[4730]: E0320 15:42:21.652286 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:22 crc kubenswrapper[4730]: I0320 15:42:22.533058 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:22 crc kubenswrapper[4730]: I0320 15:42:22.533243 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:22 crc kubenswrapper[4730]: E0320 15:42:22.533335 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:22 crc kubenswrapper[4730]: I0320 15:42:22.533455 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:22 crc kubenswrapper[4730]: E0320 15:42:22.534036 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:22 crc kubenswrapper[4730]: E0320 15:42:22.534325 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:23 crc kubenswrapper[4730]: I0320 15:42:23.532622 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:23 crc kubenswrapper[4730]: E0320 15:42:23.532910 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:24 crc kubenswrapper[4730]: I0320 15:42:24.533230 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:24 crc kubenswrapper[4730]: I0320 15:42:24.533287 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:24 crc kubenswrapper[4730]: E0320 15:42:24.533586 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:24 crc kubenswrapper[4730]: E0320 15:42:24.533703 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:24 crc kubenswrapper[4730]: I0320 15:42:24.533913 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:24 crc kubenswrapper[4730]: E0320 15:42:24.534075 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:25 crc kubenswrapper[4730]: I0320 15:42:25.533342 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:25 crc kubenswrapper[4730]: E0320 15:42:25.533551 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:25 crc kubenswrapper[4730]: I0320 15:42:25.533757 4730 scope.go:117] "RemoveContainer" containerID="12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812" Mar 20 15:42:26 crc kubenswrapper[4730]: I0320 15:42:26.426956 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/1.log" Mar 20 15:42:26 crc kubenswrapper[4730]: I0320 15:42:26.427340 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerStarted","Data":"b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e"} Mar 20 15:42:26 crc kubenswrapper[4730]: I0320 15:42:26.532795 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:26 crc kubenswrapper[4730]: I0320 15:42:26.532864 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:26 crc kubenswrapper[4730]: I0320 15:42:26.532868 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:26 crc kubenswrapper[4730]: E0320 15:42:26.533136 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:26 crc kubenswrapper[4730]: E0320 15:42:26.533285 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:26 crc kubenswrapper[4730]: E0320 15:42:26.533451 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:26 crc kubenswrapper[4730]: E0320 15:42:26.653889 4730 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:42:27 crc kubenswrapper[4730]: I0320 15:42:27.532742 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:27 crc kubenswrapper[4730]: E0320 15:42:27.532885 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:28 crc kubenswrapper[4730]: I0320 15:42:28.532140 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:28 crc kubenswrapper[4730]: I0320 15:42:28.532263 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:28 crc kubenswrapper[4730]: E0320 15:42:28.532390 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:28 crc kubenswrapper[4730]: E0320 15:42:28.532606 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:28 crc kubenswrapper[4730]: I0320 15:42:28.532177 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:28 crc kubenswrapper[4730]: E0320 15:42:28.533443 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:29 crc kubenswrapper[4730]: I0320 15:42:29.532577 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:29 crc kubenswrapper[4730]: E0320 15:42:29.532718 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:30 crc kubenswrapper[4730]: I0320 15:42:30.532155 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:30 crc kubenswrapper[4730]: E0320 15:42:30.533027 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 15:42:30 crc kubenswrapper[4730]: I0320 15:42:30.532186 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:30 crc kubenswrapper[4730]: E0320 15:42:30.533603 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 15:42:30 crc kubenswrapper[4730]: I0320 15:42:30.532155 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:30 crc kubenswrapper[4730]: E0320 15:42:30.533779 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 15:42:31 crc kubenswrapper[4730]: I0320 15:42:31.532464 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:31 crc kubenswrapper[4730]: E0320 15:42:31.533830 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.532092 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.532157 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.532089 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.581066 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.581854 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.582955 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 15:42:32 crc kubenswrapper[4730]: I0320 15:42:32.586509 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 15:42:33 crc kubenswrapper[4730]: I0320 15:42:33.533172 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:42:33 crc kubenswrapper[4730]: I0320 15:42:33.534947 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 15:42:33 crc kubenswrapper[4730]: I0320 15:42:33.535137 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.313107 4730 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.474598 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.476240 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jzx77"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.476366 4730 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.477987 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.479852 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k6z2l"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.480380 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491315 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491415 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491823 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491825 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491961 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.492064 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.491968 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 15:42:36 crc 
kubenswrapper[4730]: I0320 15:42:36.492064 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.492334 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.492585 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.492697 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.492896 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.493284 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.493438 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.493602 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.494118 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.494233 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.494399 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.494606 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.494831 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.495222 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.495235 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.495336 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.495522 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.501437 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.519462 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-st79s"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.519904 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.520048 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.522609 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk"] Mar 
20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.522742 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.522810 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.522855 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.525133 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-csmvr"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.525603 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.525668 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.525805 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.526539 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-g7hdt"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.526722 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.526906 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.527414 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.527756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.527999 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.530366 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mkxg7"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.530712 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531050 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531100 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531232 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531680 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.531861 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.532009 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.532139 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.532705 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.533729 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nfww4"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.534563 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.535811 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.536454 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.536855 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.539238 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.541044 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.541425 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.543448 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.544769 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.545057 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.545802 4730 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.545947 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546106 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546490 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546682 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546769 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546909 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.546939 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547016 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547074 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547417 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 20 15:42:36 crc 
kubenswrapper[4730]: I0320 15:42:36.547529 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547555 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547590 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547671 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547734 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547770 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547936 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.547999 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548021 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548066 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548077 4730 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548109 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548137 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548109 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548165 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548176 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548180 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548224 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548286 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548304 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548316 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548436 4730 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548439 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548459 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548443 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.548721 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549024 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-92dt7"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549068 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549363 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549483 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549677 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.549855 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.552624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.552922 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.552969 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553194 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553287 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553372 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553414 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553458 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 20 
15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553196 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553719 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553795 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.554045 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.554133 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.554208 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.554335 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553604 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553658 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553665 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.553709 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.554346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.575961 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.576347 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.576503 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.599538 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.600347 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.600537 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.601339 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.601411 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.603397 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.603902 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.605816 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.608555 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.609076 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.609154 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.609510 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.609689 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.609907 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.610926 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.611470 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.611722 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.612183 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.614198 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.614873 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-84pdq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.615406 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.616077 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.618364 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.619319 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.620194 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.620583 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.621063 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.621770 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.624833 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.626202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.630892 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.631322 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.631630 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.633139 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5gdn"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.635006 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.635474 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.635559 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.636394 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.648597 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-g7hdt"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.648681 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.649879 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.650870 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sxbnn"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.651188 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.652397 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.654435 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8n5gl"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.657127 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.657935 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.660786 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.660846 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpl2\" (UniqueName: \"kubernetes.io/projected/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-kube-api-access-ftpl2\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.660886 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.660958 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-trusted-ca\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.660997 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86jz\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-kube-api-access-d86jz\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661054 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-machine-approver-tls\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661114 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig\") 
pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661230 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661349 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.661235 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.663192 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567022-wf5nv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.663447 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.665352 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj7hq\" (UniqueName: \"kubernetes.io/projected/7662a0cc-faaa-47da-90f9-f3a8907a0401-kube-api-access-xj7hq\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.665452 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.665549 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-auth-proxy-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.665583 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-serving-cert\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.665625 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-png7p\" (UniqueName: \"kubernetes.io/projected/18214bd2-9c3a-4737-885b-2b5c905311d8-kube-api-access-png7p\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.688164 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.687378 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689379 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-config\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689406 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert\") pod 
\"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689424 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xmg\" (UniqueName: \"kubernetes.io/projected/a250c56d-72fb-473d-98ce-c013e9d15b4a-kube-api-access-v6xmg\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689440 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689471 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-default-certificate\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689486 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 
15:42:36.689501 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc0c5b5-55bb-4339-8162-bb647b833006-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689516 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689534 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-client\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689556 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689570 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-images\") pod 
\"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zgg\" (UniqueName: \"kubernetes.io/projected/d784e9cd-d5af-496e-abca-ce30096bb0d0-kube-api-access-v9zgg\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689602 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18214bd2-9c3a-4737-885b-2b5c905311d8-service-ca-bundle\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689618 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-serving-cert\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d784e9cd-d5af-496e-abca-ce30096bb0d0-serving-cert\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689656 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ggbs\" (UniqueName: \"kubernetes.io/projected/a835e0ec-4721-4824-8846-fcc7e12db3f9-kube-api-access-9ggbs\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689675 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689690 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689705 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvx5\" (UniqueName: \"kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689723 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-node-pullsecrets\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689760 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-config\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689775 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7b9l\" (UniqueName: \"kubernetes.io/projected/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-kube-api-access-t7b9l\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689810 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjf27\" (UniqueName: \"kubernetes.io/projected/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-kube-api-access-rjf27\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689941 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-config\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d581333-2d6e-44d6-a6fc-b90c3b16baad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.689970 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vj7\" (UniqueName: \"kubernetes.io/projected/7d581333-2d6e-44d6-a6fc-b90c3b16baad-kube-api-access-q6vj7\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690014 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-encryption-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690286 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-stats-auth\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " 
pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690314 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-serving-cert\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690335 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690365 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690405 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-dir\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690433 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zrc\" (UniqueName: 
\"kubernetes.io/projected/e190e098-9bc8-492f-9657-f6ccfb836f23-kube-api-access-64zrc\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690484 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc0c5b5-55bb-4339-8162-bb647b833006-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690504 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690531 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690568 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690601 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690687 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690720 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690767 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690789 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-encryption-config\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690845 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690849 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b24b\" (UniqueName: \"kubernetes.io/projected/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-kube-api-access-9b24b\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a250c56d-72fb-473d-98ce-c013e9d15b4a-metrics-tls\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690948 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ck55\" (UniqueName: \"kubernetes.io/projected/0bc0c5b5-55bb-4339-8162-bb647b833006-kube-api-access-7ck55\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpv47\" (UniqueName: \"kubernetes.io/projected/d32c9cec-9f6c-4304-8bc9-d2e52128470a-kube-api-access-zpv47\") pod \"downloads-7954f5f757-g7hdt\" (UID: \"d32c9cec-9f6c-4304-8bc9-d2e52128470a\") " pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.690995 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lws\" (UniqueName: \"kubernetes.io/projected/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-kube-api-access-v4lws\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691044 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit-dir\") pod 
\"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691088 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691109 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-metrics-certs\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691145 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-client\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691175 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691223 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-policies\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691269 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691293 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691314 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-webhook-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691370 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691390 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5z67\" (UniqueName: \"kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691430 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691449 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691467 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691500 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6sw\" (UniqueName: \"kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691597 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-image-import-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691615 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e190e098-9bc8-492f-9657-f6ccfb836f23-tmpfs\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691633 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.691692 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5zm\" (UniqueName: \"kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.704139 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.704626 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.705507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.705536 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.705553 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.705844 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.712420 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-csmvr"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.712479 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mkxg7"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.712493 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.717454 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.717744 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jzx77"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.718779 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.720559 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.721192 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.721990 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-st79s"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.722631 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.723783 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.726031 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.727487 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.727537 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.728700 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.730558 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc"] Mar 20 
15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.732543 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.733446 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.733472 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.735525 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.735552 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-84pdq"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.738506 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.743987 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.745709 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nfww4"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.747166 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.748548 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k6z2l"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 
15:42:36.748604 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7ckfm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749337 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749361 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749374 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sxbnn"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749384 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5gdn"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749393 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749401 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7ckfm"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749410 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567022-wf5nv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749418 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.749490 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.750134 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6w7m9"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.750898 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.751157 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jgzlv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.752082 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.752102 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6w7m9"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.753178 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jgzlv"] Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.758428 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.778602 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86jz\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-kube-api-access-d86jz\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: 
I0320 15:42:36.792410 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792430 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792448 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-trusted-ca\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-machine-approver-tls\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792482 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792498 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.792515 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793237 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793288 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj7hq\" (UniqueName: 
\"kubernetes.io/projected/7662a0cc-faaa-47da-90f9-f3a8907a0401-kube-api-access-xj7hq\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-auth-proxy-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-serving-cert\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793952 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-png7p\" (UniqueName: \"kubernetes.io/projected/18214bd2-9c3a-4737-885b-2b5c905311d8-kube-api-access-png7p\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6xmg\" (UniqueName: \"kubernetes.io/projected/a250c56d-72fb-473d-98ce-c013e9d15b4a-kube-api-access-v6xmg\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.793656 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc 
kubenswrapper[4730]: I0320 15:42:36.793759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-trusted-ca\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-config\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-default-certificate\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794091 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-client\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794125 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc0c5b5-55bb-4339-8162-bb647b833006-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794124 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-auth-proxy-config\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794144 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794218 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18214bd2-9c3a-4737-885b-2b5c905311d8-service-ca-bundle\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794240 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794283 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-images\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zgg\" (UniqueName: \"kubernetes.io/projected/d784e9cd-d5af-496e-abca-ce30096bb0d0-kube-api-access-v9zgg\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794335 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-serving-cert\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggbs\" (UniqueName: \"kubernetes.io/projected/a835e0ec-4721-4824-8846-fcc7e12db3f9-kube-api-access-9ggbs\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794400 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d784e9cd-d5af-496e-abca-ce30096bb0d0-serving-cert\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794424 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-node-pullsecrets\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794461 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794502 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794530 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qvx5\" (UniqueName: \"kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 
15:42:36.794554 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-config\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7b9l\" (UniqueName: \"kubernetes.io/projected/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-kube-api-access-t7b9l\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794950 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-config\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795558 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.794598 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-config\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjf27\" (UniqueName: \"kubernetes.io/projected/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-kube-api-access-rjf27\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795727 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-encryption-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795758 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d581333-2d6e-44d6-a6fc-b90c3b16baad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795786 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vj7\" (UniqueName: \"kubernetes.io/projected/7d581333-2d6e-44d6-a6fc-b90c3b16baad-kube-api-access-q6vj7\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795814 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-dir\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795867 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-stats-auth\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795925 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-serving-cert\") pod 
\"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795965 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.795985 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zrc\" (UniqueName: \"kubernetes.io/projected/e190e098-9bc8-492f-9657-f6ccfb836f23-kube-api-access-64zrc\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796004 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc0c5b5-55bb-4339-8162-bb647b833006-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796021 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796061 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796093 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796116 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796133 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc 
kubenswrapper[4730]: I0320 15:42:36.796156 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-encryption-config\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796175 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796193 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ck55\" (UniqueName: \"kubernetes.io/projected/0bc0c5b5-55bb-4339-8162-bb647b833006-kube-api-access-7ck55\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796234 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b24b\" (UniqueName: 
\"kubernetes.io/projected/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-kube-api-access-9b24b\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796255 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a250c56d-72fb-473d-98ce-c013e9d15b4a-metrics-tls\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpv47\" (UniqueName: \"kubernetes.io/projected/d32c9cec-9f6c-4304-8bc9-d2e52128470a-kube-api-access-zpv47\") pod \"downloads-7954f5f757-g7hdt\" (UID: \"d32c9cec-9f6c-4304-8bc9-d2e52128470a\") " pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lws\" (UniqueName: \"kubernetes.io/projected/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-kube-api-access-v4lws\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796350 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit-dir\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796409 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-policies\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-metrics-certs\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796461 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-client\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796480 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796512 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796505 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796535 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-webhook-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796557 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796574 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796611 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796629 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796665 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5z67\" (UniqueName: \"kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796701 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796767 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6sw\" (UniqueName: \"kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796801 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq5zm\" (UniqueName: 
\"kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796819 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-image-import-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e190e098-9bc8-492f-9657-f6ccfb836f23-tmpfs\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.796970 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpl2\" (UniqueName: \"kubernetes.io/projected/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-kube-api-access-ftpl2\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797003 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797342 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-images\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-serving-cert\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797851 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-config\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797912 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-machine-approver-tls\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.798481 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.798646 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.798718 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-node-pullsecrets\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.799154 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bc0c5b5-55bb-4339-8162-bb647b833006-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.799346 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-config\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: 
I0320 15:42:36.793540 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.799585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.799940 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.800108 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.810415 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.810768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.797011 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.811072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-client\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.811377 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-serving-cert\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.811630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: 
\"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.811665 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.811985 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.812126 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d581333-2d6e-44d6-a6fc-b90c3b16baad-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.812433 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-encryption-config\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.812467 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.812752 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.813295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-policies\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.813412 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d784e9cd-d5af-496e-abca-ce30096bb0d0-serving-cert\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.813292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.814756 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-image-import-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.814903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.815656 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7662a0cc-faaa-47da-90f9-f3a8907a0401-audit-dir\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.815673 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.815821 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7662a0cc-faaa-47da-90f9-f3a8907a0401-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.815836 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session\") 
pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.816088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.816152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.816992 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.817630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d784e9cd-d5af-496e-abca-ce30096bb0d0-service-ca-bundle\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.817726 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a835e0ec-4721-4824-8846-fcc7e12db3f9-audit-dir\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.817942 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e190e098-9bc8-492f-9657-f6ccfb836f23-tmpfs\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.818508 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.818566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.818599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.818819 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-st79s\" (UID: 
\"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.819199 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.820187 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-serving-ca\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.820470 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.821085 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7662a0cc-faaa-47da-90f9-f3a8907a0401-encryption-config\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.824854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bc0c5b5-55bb-4339-8162-bb647b833006-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.825035 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-apiservice-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.825118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-etcd-client\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.825433 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a835e0ec-4721-4824-8846-fcc7e12db3f9-serving-cert\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.825752 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a250c56d-72fb-473d-98ce-c013e9d15b4a-metrics-tls\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.826293 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/e190e098-9bc8-492f-9657-f6ccfb836f23-webhook-cert\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.839697 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.858507 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.862283 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.862557 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.862987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.867481 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.867818 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.868176 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.868507 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.869480 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.870006 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-default-certificate\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.870693 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.878656 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.898490 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.908581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-stats-auth\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.919042 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.926747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/18214bd2-9c3a-4737-885b-2b5c905311d8-metrics-certs\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.938711 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.959039 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.966019 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18214bd2-9c3a-4737-885b-2b5c905311d8-service-ca-bundle\") pod \"router-default-5444994796-92dt7\" (UID: \"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:36 crc kubenswrapper[4730]: I0320 15:42:36.979554 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.018542 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.039758 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.059140 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.078624 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.099146 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.119916 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.139650 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.158372 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.179039 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.198774 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.219060 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.239270 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.259588 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.279440 4730 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.299476 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.318368 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.339099 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.359655 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.380436 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.401386 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.419970 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.440589 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.459631 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.490728 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.499410 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.518546 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.539583 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.560271 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.579313 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.599814 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.620084 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.637129 4730 request.go:700] Waited for 1.001713222s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.639840 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 20 
15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.660037 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.679963 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.699296 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.719594 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.739911 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.759182 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.778784 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.799859 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.818615 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.838049 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.858275 4730 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.878905 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.898540 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.920030 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.939037 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.959679 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.978323 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 15:42:37 crc kubenswrapper[4730]: I0320 15:42:37.998697 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.019782 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.039017 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.058939 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.078366 
4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.098231 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.124027 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.138966 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.158273 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.178709 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.198544 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.218812 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.258669 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.278089 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.297631 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 20 15:42:38 crc 
kubenswrapper[4730]: I0320 15:42:38.318023 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.338006 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.358021 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.378550 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.397622 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.418054 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.438326 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.458158 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.478396 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.498126 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.518843 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 15:42:38 
crc kubenswrapper[4730]: I0320 15:42:38.538347 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.558738 4730 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.578933 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.613073 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86jz\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-kube-api-access-d86jz\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.631517 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj7hq\" (UniqueName: \"kubernetes.io/projected/7662a0cc-faaa-47da-90f9-f3a8907a0401-kube-api-access-xj7hq\") pod \"apiserver-7bbb656c7d-nsdw7\" (UID: \"7662a0cc-faaa-47da-90f9-f3a8907a0401\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.637213 4730 request.go:700] Waited for 1.843014254s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.653400 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-png7p\" (UniqueName: \"kubernetes.io/projected/18214bd2-9c3a-4737-885b-2b5c905311d8-kube-api-access-png7p\") pod \"router-default-5444994796-92dt7\" (UID: 
\"18214bd2-9c3a-4737-885b-2b5c905311d8\") " pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.682530 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7406e06c-cbde-48f9-b5e7-57a2a86b5a4d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xspkm\" (UID: \"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.691667 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zgg\" (UniqueName: \"kubernetes.io/projected/d784e9cd-d5af-496e-abca-ce30096bb0d0-kube-api-access-v9zgg\") pod \"authentication-operator-69f744f599-csmvr\" (UID: \"d784e9cd-d5af-496e-abca-ce30096bb0d0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.695292 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.711995 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjf27\" (UniqueName: \"kubernetes.io/projected/d4e38bce-6ae6-451b-aa9f-7a98dfa4d974-kube-api-access-rjf27\") pod \"openshift-config-operator-7777fb866f-6rbg9\" (UID: \"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.731753 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qvx5\" (UniqueName: \"kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5\") pod \"console-f9d7485db-9kgl8\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.756311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ggbs\" (UniqueName: \"kubernetes.io/projected/a835e0ec-4721-4824-8846-fcc7e12db3f9-kube-api-access-9ggbs\") pod \"apiserver-76f77b778f-jzx77\" (UID: \"a835e0ec-4721-4824-8846-fcc7e12db3f9\") " pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.784636 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7b9l\" (UniqueName: \"kubernetes.io/projected/5c0e41b3-aa2d-4083-acb2-f0f68a29fcce-kube-api-access-t7b9l\") pod \"machine-api-operator-5694c8668f-k6z2l\" (UID: \"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.792482 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.798384 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6xmg\" (UniqueName: \"kubernetes.io/projected/a250c56d-72fb-473d-98ce-c013e9d15b4a-kube-api-access-v6xmg\") pod \"dns-operator-744455d44c-nfww4\" (UID: \"a250c56d-72fb-473d-98ce-c013e9d15b4a\") " pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.818862 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vj7\" (UniqueName: \"kubernetes.io/projected/7d581333-2d6e-44d6-a6fc-b90c3b16baad-kube-api-access-q6vj7\") pod \"cluster-samples-operator-665b6dd947-kfjm5\" (UID: \"7d581333-2d6e-44d6-a6fc-b90c3b16baad\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.832802 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.839905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb6sw\" (UniqueName: \"kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw\") pod \"route-controller-manager-6576b87f9c-tgpgm\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.840428 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.860232 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq5zm\" (UniqueName: \"kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm\") pod \"controller-manager-879f6c89f-hrm7z\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.869298 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.874461 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zrc\" (UniqueName: \"kubernetes.io/projected/e190e098-9bc8-492f-9657-f6ccfb836f23-kube-api-access-64zrc\") pod \"packageserver-d55dfcdfc-2qdqs\" (UID: \"e190e098-9bc8-492f-9657-f6ccfb836f23\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.887125 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.891446 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.904333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5z67\" (UniqueName: \"kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67\") pod \"oauth-openshift-558db77b4-st79s\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.917180 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpl2\" (UniqueName: \"kubernetes.io/projected/2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3-kube-api-access-ftpl2\") pod \"console-operator-58897d9998-mkxg7\" (UID: \"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3\") " pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.919820 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.936912 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b24b\" (UniqueName: \"kubernetes.io/projected/ad1f04f2-f7c4-4bc6-9daf-0db7a0809206-kube-api-access-9b24b\") pod \"machine-approver-56656f9798-ldb64\" (UID: \"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.942056 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.952996 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.954353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ck55\" (UniqueName: \"kubernetes.io/projected/0bc0c5b5-55bb-4339-8162-bb647b833006-kube-api-access-7ck55\") pod \"openshift-controller-manager-operator-756b6f6bc6-pncxq\" (UID: \"0bc0c5b5-55bb-4339-8162-bb647b833006\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.958797 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.975044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpv47\" (UniqueName: \"kubernetes.io/projected/d32c9cec-9f6c-4304-8bc9-d2e52128470a-kube-api-access-zpv47\") pod \"downloads-7954f5f757-g7hdt\" (UID: \"d32c9cec-9f6c-4304-8bc9-d2e52128470a\") " pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.981318 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7"] Mar 20 15:42:38 crc kubenswrapper[4730]: I0320 15:42:38.993625 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lws\" (UniqueName: \"kubernetes.io/projected/ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf-kube-api-access-v4lws\") pod \"openshift-apiserver-operator-796bbdcf4f-zr8dk\" (UID: \"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:39 crc kubenswrapper[4730]: W0320 15:42:39.001581 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7662a0cc_faaa_47da_90f9_f3a8907a0401.slice/crio-aca6a436c7afd8408d9c66f89c190d3570325670190e36d191782e17a273a429 WatchSource:0}: Error finding container aca6a436c7afd8408d9c66f89c190d3570325670190e36d191782e17a273a429: Status 404 returned error can't find the container with id aca6a436c7afd8408d9c66f89c190d3570325670190e36d191782e17a273a429 Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.008537 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027790 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027862 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78n7\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 
15:42:39.027889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027910 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.027966 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.028003 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca\") pod 
\"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.028304 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.528293746 +0000 UTC m=+218.741665115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.050468 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.060473 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.104370 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-csmvr"] Mar 20 15:42:39 crc kubenswrapper[4730]: W0320 15:42:39.114406 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad1f04f2_f7c4_4bc6_9daf_0db7a0809206.slice/crio-cadb9ce1c6e0e9f7fc23aed07ca259c9113f4c4927998e0ff33fed07306ab4ff WatchSource:0}: Error finding container cadb9ce1c6e0e9f7fc23aed07ca259c9113f4c4927998e0ff33fed07306ab4ff: Status 404 returned error can't find the container with id cadb9ce1c6e0e9f7fc23aed07ca259c9113f4c4927998e0ff33fed07306ab4ff Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.117342 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.121698 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.130636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.130925 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac46477-04bc-4d0a-b28e-b687c690dd5a-config\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.130966 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbc673a-2498-49dc-b98e-d7ddc58d2999-config\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-cert\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131062 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkcx\" (UniqueName: \"kubernetes.io/projected/8e224294-495e-4d65-96f2-8e0d2a444ef1-kube-api-access-pzkcx\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131123 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-node-bootstrap-token\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131156 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-serving-cert\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5612cc7f-9299-43b4-b97c-cf579a416e84-proxy-tls\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.131216 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e224294-495e-4d65-96f2-8e0d2a444ef1-proxy-tls\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.132193 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.632171824 +0000 UTC m=+218.845543193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78n7\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133381 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4lb\" (UniqueName: \"kubernetes.io/projected/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-kube-api-access-gs4lb\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133516 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133548 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt52k\" (UniqueName: \"kubernetes.io/projected/f6f09179-5752-4a5a-ab79-72a176bbdd9a-kube-api-access-rt52k\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 
15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133572 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb6ff6b-d521-408a-831c-a6a9c524a671-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133592 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8nxk\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-kube-api-access-c8nxk\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133618 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-registration-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133637 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6bfj\" (UniqueName: \"kubernetes.io/projected/1010c304-4912-42b2-aa8c-17d44c4bf6cb-kube-api-access-g6bfj\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133684 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133724 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133772 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.133807 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-srv-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.136017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.137430 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.137542 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afbc673a-2498-49dc-b98e-d7ddc58d2999-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.137575 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gxc\" (UniqueName: \"kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc\") pod \"auto-csr-approver-29567022-wf5nv\" (UID: \"7d87adfe-3206-4175-8d8f-5a00015cc61e\") " pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2f6\" (UniqueName: \"kubernetes.io/projected/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-kube-api-access-zj2f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138281 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d04de14b-8e96-44ab-818f-2b08d78d2e14-metrics-tls\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-config\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138350 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-profile-collector-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138374 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5496e-57aa-4f42-b53d-590fb534d26e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138397 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-service-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 
15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138460 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138486 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wjq\" (UniqueName: \"kubernetes.io/projected/d9a5496e-57aa-4f42-b53d-590fb534d26e-kube-api-access-r7wjq\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138513 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138543 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138647 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac46477-04bc-4d0a-b28e-b687c690dd5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138677 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-client\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138705 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2083343b-2ec0-4306-a0a5-f74dd0f63746-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.138731 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxflj\" (UniqueName: \"kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.140612 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvgx\" (UniqueName: 
\"kubernetes.io/projected/428fa435-b92e-4363-82bb-40316d3e0a26-kube-api-access-kzvgx\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.140656 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afbc673a-2498-49dc-b98e-d7ddc58d2999-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.140887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.140965 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnh9\" (UniqueName: \"kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkjz\" (UniqueName: \"kubernetes.io/projected/c2d975f8-1a1e-4921-aef0-3c4652992a02-kube-api-access-njkjz\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 
15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-srv-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141958 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6cmt\" (UniqueName: \"kubernetes.io/projected/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-kube-api-access-b6cmt\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141974 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jhd\" (UniqueName: \"kubernetes.io/projected/2083343b-2ec0-4306-a0a5-f74dd0f63746-kube-api-access-n8jhd\") pod \"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.141989 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.142019 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-plugins-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.142052 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.146897 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.147589 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 
15:42:39.147828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-csi-data-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.148231 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.648210148 +0000 UTC m=+218.861581507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.148313 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgbn\" (UniqueName: \"kubernetes.io/projected/49896a92-a6b0-45ea-a736-09a368d90be4-kube-api-access-xxgbn\") pod \"migrator-59844c95c7-qxnn6\" (UID: \"49896a92-a6b0-45ea-a736-09a368d90be4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149190 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149535 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac46477-04bc-4d0a-b28e-b687c690dd5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149778 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e213a906-8ad6-45c1-b832-a42d58fd91c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149804 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149834 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-cabundle\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.149865 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d04de14b-8e96-44ab-818f-2b08d78d2e14-config-volume\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.150662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-socket-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.150716 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-mountpoint-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.150996 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/0a030e24-2337-49a2-a5e2-118714cd7ff9-kube-api-access-wz2n6\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.151433 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158400 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-key\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158478 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75nz\" (UniqueName: \"kubernetes.io/projected/d04de14b-8e96-44ab-818f-2b08d78d2e14-kube-api-access-r75nz\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158514 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-config\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158581 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-serving-cert\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158728 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.158941 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbb6ff6b-d521-408a-831c-a6a9c524a671-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.159049 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbfb\" (UniqueName: \"kubernetes.io/projected/e213a906-8ad6-45c1-b832-a42d58fd91c6-kube-api-access-bkbfb\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.160198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.161507 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5496e-57aa-4f42-b53d-590fb534d26e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.161634 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.161945 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-images\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.162498 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbshk\" (UniqueName: \"kubernetes.io/projected/5612cc7f-9299-43b4-b97c-cf579a416e84-kube-api-access-fbshk\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.163074 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e224294-495e-4d65-96f2-8e0d2a444ef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc 
kubenswrapper[4730]: I0320 15:42:39.163108 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-certs\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.163171 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wgn\" (UniqueName: \"kubernetes.io/projected/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-kube-api-access-27wgn\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.163231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.174227 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.176631 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.177479 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78n7\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.236660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" event={"ID":"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206","Type":"ContainerStarted","Data":"cadb9ce1c6e0e9f7fc23aed07ca259c9113f4c4927998e0ff33fed07306ab4ff"} Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.242199 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jzx77"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.245259 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" event={"ID":"d784e9cd-d5af-496e-abca-ce30096bb0d0","Type":"ContainerStarted","Data":"ca69c531049adbce31f8306da31b6663856abafbfe3b5b42a37934246712933c"} Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.246895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" event={"ID":"7662a0cc-faaa-47da-90f9-f3a8907a0401","Type":"ContainerStarted","Data":"aca6a436c7afd8408d9c66f89c190d3570325670190e36d191782e17a273a429"} Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.248320 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-92dt7" 
event={"ID":"18214bd2-9c3a-4737-885b-2b5c905311d8","Type":"ContainerStarted","Data":"abcdbf5a3476ecc0ddcea4614869525f19124592633fbe93a76d20aee50f11ac"} Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.248350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-92dt7" event={"ID":"18214bd2-9c3a-4737-885b-2b5c905311d8","Type":"ContainerStarted","Data":"d2f20db5f347dbd8ba3429cb3c60dcc2ce2ee189be84df39412513494c4307a4"} Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264504 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.264635 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.764615772 +0000 UTC m=+218.977987141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264780 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wjq\" (UniqueName: \"kubernetes.io/projected/d9a5496e-57aa-4f42-b53d-590fb534d26e-kube-api-access-r7wjq\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac46477-04bc-4d0a-b28e-b687c690dd5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264865 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-client\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvgx\" (UniqueName: \"kubernetes.io/projected/428fa435-b92e-4363-82bb-40316d3e0a26-kube-api-access-kzvgx\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afbc673a-2498-49dc-b98e-d7ddc58d2999-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264932 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2083343b-2ec0-4306-a0a5-f74dd0f63746-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264954 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxflj\" (UniqueName: \"kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 
15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.264980 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkjz\" (UniqueName: \"kubernetes.io/projected/c2d975f8-1a1e-4921-aef0-3c4652992a02-kube-api-access-njkjz\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265005 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnh9\" (UniqueName: \"kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-srv-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6cmt\" (UniqueName: \"kubernetes.io/projected/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-kube-api-access-b6cmt\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jhd\" (UniqueName: \"kubernetes.io/projected/2083343b-2ec0-4306-a0a5-f74dd0f63746-kube-api-access-n8jhd\") pod 
\"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265092 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-plugins-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265115 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265163 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-csi-data-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265186 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xxgbn\" (UniqueName: \"kubernetes.io/projected/49896a92-a6b0-45ea-a736-09a368d90be4-kube-api-access-xxgbn\") pod \"migrator-59844c95c7-qxnn6\" (UID: \"49896a92-a6b0-45ea-a736-09a368d90be4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265231 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac46477-04bc-4d0a-b28e-b687c690dd5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265277 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e213a906-8ad6-45c1-b832-a42d58fd91c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265324 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-socket-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265344 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-mountpoint-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265367 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-cabundle\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d04de14b-8e96-44ab-818f-2b08d78d2e14-config-volume\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265416 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265437 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/0a030e24-2337-49a2-a5e2-118714cd7ff9-kube-api-access-wz2n6\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265474 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-key\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265496 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75nz\" (UniqueName: \"kubernetes.io/projected/d04de14b-8e96-44ab-818f-2b08d78d2e14-kube-api-access-r75nz\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-config\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265551 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-serving-cert\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbb6ff6b-d521-408a-831c-a6a9c524a671-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265621 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkbfb\" (UniqueName: \"kubernetes.io/projected/e213a906-8ad6-45c1-b832-a42d58fd91c6-kube-api-access-bkbfb\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5496e-57aa-4f42-b53d-590fb534d26e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265662 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-images\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265664 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-plugins-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbshk\" (UniqueName: \"kubernetes.io/projected/5612cc7f-9299-43b4-b97c-cf579a416e84-kube-api-access-fbshk\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e224294-495e-4d65-96f2-8e0d2a444ef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265730 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-certs\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " 
pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265751 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27wgn\" (UniqueName: \"kubernetes.io/projected/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-kube-api-access-27wgn\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265781 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac46477-04bc-4d0a-b28e-b687c690dd5a-config\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265802 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbc673a-2498-49dc-b98e-d7ddc58d2999-config\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265821 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-cert\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzkcx\" (UniqueName: 
\"kubernetes.io/projected/8e224294-495e-4d65-96f2-8e0d2a444ef1-kube-api-access-pzkcx\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265866 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-node-bootstrap-token\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-serving-cert\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5612cc7f-9299-43b4-b97c-cf579a416e84-proxy-tls\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265929 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e224294-495e-4d65-96f2-8e0d2a444ef1-proxy-tls\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 
15:42:39.265968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.265995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4lb\" (UniqueName: \"kubernetes.io/projected/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-kube-api-access-gs4lb\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266019 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt52k\" (UniqueName: \"kubernetes.io/projected/f6f09179-5752-4a5a-ab79-72a176bbdd9a-kube-api-access-rt52k\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-registration-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " 
pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb6ff6b-d521-408a-831c-a6a9c524a671-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266107 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8nxk\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-kube-api-access-c8nxk\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266129 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6bfj\" (UniqueName: \"kubernetes.io/projected/1010c304-4912-42b2-aa8c-17d44c4bf6cb-kube-api-access-g6bfj\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266152 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266176 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266197 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-srv-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266218 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afbc673a-2498-49dc-b98e-d7ddc58d2999-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46gxc\" (UniqueName: \"kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc\") pod \"auto-csr-approver-29567022-wf5nv\" (UID: \"7d87adfe-3206-4175-8d8f-5a00015cc61e\") " pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266297 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266333 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2f6\" (UniqueName: \"kubernetes.io/projected/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-kube-api-access-zj2f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266355 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d04de14b-8e96-44ab-818f-2b08d78d2e14-metrics-tls\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266381 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-config\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266402 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-profile-collector-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266422 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5496e-57aa-4f42-b53d-590fb534d26e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-service-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.266491 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.267187 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.267424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-socket-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.267482 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-mountpoint-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.268333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-csi-data-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.268890 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.269153 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.769139231 +0000 UTC m=+218.982510640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.269962 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-cabundle\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.270632 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d04de14b-8e96-44ab-818f-2b08d78d2e14-config-volume\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.271386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.273191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-client\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.273292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afbc673a-2498-49dc-b98e-d7ddc58d2999-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.273320 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/428fa435-b92e-4363-82bb-40316d3e0a26-registration-dir\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.275398 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e224294-495e-4d65-96f2-8e0d2a444ef1-proxy-tls\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.276151 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2083343b-2ec0-4306-a0a5-f74dd0f63746-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.276869 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/1010c304-4912-42b2-aa8c-17d44c4bf6cb-signing-key\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.277147 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-config\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.277937 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.278745 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e213a906-8ad6-45c1-b832-a42d58fd91c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.278773 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6f09179-5752-4a5a-ab79-72a176bbdd9a-srv-cert\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.279386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/dbb6ff6b-d521-408a-831c-a6a9c524a671-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.279520 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.279704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.280623 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5496e-57aa-4f42-b53d-590fb534d26e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.281404 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-srv-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.282056 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-serving-cert\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.285114 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ac46477-04bc-4d0a-b28e-b687c690dd5a-config\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.286008 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.286619 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5612cc7f-9299-43b4-b97c-cf579a416e84-images\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.286744 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.287421 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e224294-495e-4d65-96f2-8e0d2a444ef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.289901 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-profile-collector-cert\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.290205 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-config\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.290716 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-certs\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.291644 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a030e24-2337-49a2-a5e2-118714cd7ff9-node-bootstrap-token\") pod 
\"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.295334 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5496e-57aa-4f42-b53d-590fb534d26e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.296105 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d975f8-1a1e-4921-aef0-3c4652992a02-etcd-service-ca\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.296581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d04de14b-8e96-44ab-818f-2b08d78d2e14-metrics-tls\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.296878 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.296924 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/afbc673a-2498-49dc-b98e-d7ddc58d2999-config\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.297112 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.298407 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac46477-04bc-4d0a-b28e-b687c690dd5a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.298586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d975f8-1a1e-4921-aef0-3c4652992a02-serving-cert\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.299211 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-cert\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.310544 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbb6ff6b-d521-408a-831c-a6a9c524a671-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.310905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5612cc7f-9299-43b4-b97c-cf579a416e84-proxy-tls\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.320224 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wjq\" (UniqueName: \"kubernetes.io/projected/d9a5496e-57aa-4f42-b53d-590fb534d26e-kube-api-access-r7wjq\") pod \"kube-storage-version-migrator-operator-b67b599dd-sk8hp\" (UID: \"d9a5496e-57aa-4f42-b53d-590fb534d26e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.333811 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.335111 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.340280 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvgx\" (UniqueName: \"kubernetes.io/projected/428fa435-b92e-4363-82bb-40316d3e0a26-kube-api-access-kzvgx\") pod \"csi-hostpathplugin-jgzlv\" (UID: \"428fa435-b92e-4363-82bb-40316d3e0a26\") " pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.346829 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.353539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df4fa0ea-abb1-49ea-8d74-2992c71c1a0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqxff\" (UID: \"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.356723 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.370296 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.371314 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.871278066 +0000 UTC m=+219.084649435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.379217 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.423690 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnh9\" (UniqueName: 
\"kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9\") pod \"collect-profiles-29567010-d69bc\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.430391 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/0a030e24-2337-49a2-a5e2-118714cd7ff9-kube-api-access-wz2n6\") pod \"machine-config-server-8n5gl\" (UID: \"0a030e24-2337-49a2-a5e2-118714cd7ff9\") " pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.439948 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.445341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ac46477-04bc-4d0a-b28e-b687c690dd5a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwghv\" (UID: \"9ac46477-04bc-4d0a-b28e-b687c690dd5a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.458609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxgbn\" (UniqueName: \"kubernetes.io/projected/49896a92-a6b0-45ea-a736-09a368d90be4-kube-api-access-xxgbn\") pod \"migrator-59844c95c7-qxnn6\" (UID: \"49896a92-a6b0-45ea-a736-09a368d90be4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.474122 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.474562 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:39.974547205 +0000 UTC m=+219.187918574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.476233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2f6\" (UniqueName: \"kubernetes.io/projected/c9f80b42-cff3-48a7-9e09-02ff65e9d9f8-kube-api-access-zj2f6\") pod \"control-plane-machine-set-operator-78cbb6b69f-jkk9s\" (UID: \"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.505879 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.507053 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75nz\" (UniqueName: \"kubernetes.io/projected/d04de14b-8e96-44ab-818f-2b08d78d2e14-kube-api-access-r75nz\") pod \"dns-default-7ckfm\" (UID: \"d04de14b-8e96-44ab-818f-2b08d78d2e14\") " pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.524500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxflj\" (UniqueName: \"kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj\") pod \"marketplace-operator-79b997595-klbh8\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.539479 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.539511 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nfww4"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.542637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkjz\" (UniqueName: \"kubernetes.io/projected/c2d975f8-1a1e-4921-aef0-3c4652992a02-kube-api-access-njkjz\") pod \"etcd-operator-b45778765-w5gdn\" (UID: \"c2d975f8-1a1e-4921-aef0-3c4652992a02\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.556167 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46gxc\" (UniqueName: \"kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc\") pod \"auto-csr-approver-29567022-wf5nv\" (UID: 
\"7d87adfe-3206-4175-8d8f-5a00015cc61e\") " pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.562295 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k6z2l"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.575110 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.576146 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.076120533 +0000 UTC m=+219.289491902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.578236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afbc673a-2498-49dc-b98e-d7ddc58d2999-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mzjxx\" (UID: \"afbc673a-2498-49dc-b98e-d7ddc58d2999\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.593759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8nxk\" (UniqueName: \"kubernetes.io/projected/dbb6ff6b-d521-408a-831c-a6a9c524a671-kube-api-access-c8nxk\") pod \"ingress-operator-5b745b69d9-fjhxb\" (UID: \"dbb6ff6b-d521-408a-831c-a6a9c524a671\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: W0320 15:42:39.602873 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda250c56d_72fb_473d_98ce_c013e9d15b4a.slice/crio-c78c2f0da29099b0ea2fa0b59fa2d2ed0d533a9a4639f85dd9ae55e1421c9dc4 WatchSource:0}: Error finding container c78c2f0da29099b0ea2fa0b59fa2d2ed0d533a9a4639f85dd9ae55e1421c9dc4: Status 404 returned error can't find the container with id c78c2f0da29099b0ea2fa0b59fa2d2ed0d533a9a4639f85dd9ae55e1421c9dc4 Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.603481 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.608320 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.616341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6bfj\" (UniqueName: \"kubernetes.io/projected/1010c304-4912-42b2-aa8c-17d44c4bf6cb-kube-api-access-g6bfj\") pod \"service-ca-9c57cc56f-sxbnn\" (UID: \"1010c304-4912-42b2-aa8c-17d44c4bf6cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.641940 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.647893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4lb\" (UniqueName: \"kubernetes.io/projected/8a268a97-bf49-4ed6-b239-1a088c3c4e4f-kube-api-access-gs4lb\") pod \"ingress-canary-6w7m9\" (UID: \"8a268a97-bf49-4ed6-b239-1a088c3c4e4f\") " pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.657837 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6cmt\" (UniqueName: \"kubernetes.io/projected/4e2a7090-33b8-4137-be83-5c2e5ab1ccc7-kube-api-access-b6cmt\") pod \"catalog-operator-68c6474976-m7cfz\" (UID: \"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.664681 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.672327 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.682854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jhd\" (UniqueName: \"kubernetes.io/projected/2083343b-2ec0-4306-a0a5-f74dd0f63746-kube-api-access-n8jhd\") pod \"package-server-manager-789f6589d5-m4xlq\" (UID: \"2083343b-2ec0-4306-a0a5-f74dd0f63746\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.682899 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.683327 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.183313643 +0000 UTC m=+219.396685012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.685761 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.687602 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.693207 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.697638 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.700089 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.701964 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:39 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:39 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:39 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.702002 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.702630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27wgn\" (UniqueName: \"kubernetes.io/projected/b82f2ec3-df30-4b45-be3a-9858edb2bb7f-kube-api-access-27wgn\") pod \"service-ca-operator-777779d784-c2cgf\" (UID: \"b82f2ec3-df30-4b45-be3a-9858edb2bb7f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.708045 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:39 crc kubenswrapper[4730]: W0320 15:42:39.711684 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a38d833_db72_4566_b139_7788730a502a.slice/crio-064c4bf4d5c66296dd98b82862d30c6c5583edf1808fe56690235f116a074683 WatchSource:0}: Error finding container 064c4bf4d5c66296dd98b82862d30c6c5583edf1808fe56690235f116a074683: Status 404 returned error can't find the container with id 064c4bf4d5c66296dd98b82862d30c6c5583edf1808fe56690235f116a074683 Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.713986 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt52k\" (UniqueName: \"kubernetes.io/projected/f6f09179-5752-4a5a-ab79-72a176bbdd9a-kube-api-access-rt52k\") pod \"olm-operator-6b444d44fb-m7fsc\" (UID: \"f6f09179-5752-4a5a-ab79-72a176bbdd9a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.714868 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.728928 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8n5gl" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.732582 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbshk\" (UniqueName: \"kubernetes.io/projected/5612cc7f-9299-43b4-b97c-cf579a416e84-kube-api-access-fbshk\") pod \"machine-config-operator-74547568cd-4dp6q\" (UID: \"5612cc7f-9299-43b4-b97c-cf579a416e84\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.744891 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.757165 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkbfb\" (UniqueName: \"kubernetes.io/projected/e213a906-8ad6-45c1-b832-a42d58fd91c6-kube-api-access-bkbfb\") pod \"multus-admission-controller-857f4d67dd-84pdq\" (UID: \"e213a906-8ad6-45c1-b832-a42d58fd91c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.772565 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzkcx\" (UniqueName: \"kubernetes.io/projected/8e224294-495e-4d65-96f2-8e0d2a444ef1-kube-api-access-pzkcx\") pod \"machine-config-controller-84d6567774-7hzc8\" (UID: \"8e224294-495e-4d65-96f2-8e0d2a444ef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.775741 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.781717 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.784575 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.784999 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.284979443 +0000 UTC m=+219.498350812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.792706 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.796819 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6w7m9" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.855650 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-g7hdt"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.870422 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-st79s"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.871686 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.887527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.888159 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.388147709 +0000 UTC m=+219.601519078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.913926 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mkxg7"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.917731 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.925324 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.930406 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.949072 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.968550 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq"] Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.976114 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.982697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" Mar 20 15:42:39 crc kubenswrapper[4730]: I0320 15:42:39.993371 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:39 crc kubenswrapper[4730]: E0320 15:42:39.993743 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.49372511 +0000 UTC m=+219.707096479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.028781 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.028859 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.096376 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.096971 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.596958928 +0000 UTC m=+219.810330297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.106775 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.108119 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jgzlv"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.197201 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.197698 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.697677389 +0000 UTC m=+219.911048768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.198799 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.199113 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.699102223 +0000 UTC m=+219.912473592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: W0320 15:42:40.223943 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc0c5b5_55bb_4339_8162_bb647b833006.slice/crio-50492069065d98dc384151c573905cb8667e0fb3fbd14c80994e16f59e459f10 WatchSource:0}: Error finding container 50492069065d98dc384151c573905cb8667e0fb3fbd14c80994e16f59e459f10: Status 404 returned error can't find the container with id 50492069065d98dc384151c573905cb8667e0fb3fbd14c80994e16f59e459f10 Mar 20 15:42:40 crc kubenswrapper[4730]: W0320 15:42:40.224630 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbc673a_2498_49dc_b98e_d7ddc58d2999.slice/crio-a094a3d0363445cd7d6b98232c158d8cd5f73dfb606055f9c0cd72f2222dd487 WatchSource:0}: Error finding container a094a3d0363445cd7d6b98232c158d8cd5f73dfb606055f9c0cd72f2222dd487: Status 404 returned error can't find the container with id a094a3d0363445cd7d6b98232c158d8cd5f73dfb606055f9c0cd72f2222dd487 Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.267919 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" event={"ID":"9a38d833-db72-4566-b139-7788730a502a","Type":"ContainerStarted","Data":"064c4bf4d5c66296dd98b82862d30c6c5583edf1808fe56690235f116a074683"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.269934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" event={"ID":"a250c56d-72fb-473d-98ce-c013e9d15b4a","Type":"ContainerStarted","Data":"c78c2f0da29099b0ea2fa0b59fa2d2ed0d533a9a4639f85dd9ae55e1421c9dc4"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.273586 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" event={"ID":"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce","Type":"ContainerStarted","Data":"2b4e8f50729275ee8c790e0f8088c598e61fb52dff6ce33a0d19bea3fb8ac220"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.276833 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" event={"ID":"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf","Type":"ContainerStarted","Data":"23b2fc6e3a3fdc1cf917a4f3f5e3a91ea9c0b75307f62e707b985d0eff061698"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.297678 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" event={"ID":"e190e098-9bc8-492f-9657-f6ccfb836f23","Type":"ContainerStarted","Data":"bd274a929342b144978e537e6ea2d49492c4e82f55a2b4b84204f310ad918116"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.297731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" event={"ID":"e190e098-9bc8-492f-9657-f6ccfb836f23","Type":"ContainerStarted","Data":"a039a5bbde0f3bd6a992953622be34641890cf92f110f9d769f5ee8e166ecf29"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.298703 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.299739 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.300134 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.800120883 +0000 UTC m=+220.013492252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.301562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9kgl8" event={"ID":"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee","Type":"ContainerStarted","Data":"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.301600 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9kgl8" event={"ID":"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee","Type":"ContainerStarted","Data":"e011dbdf40941c9f2e1edba06bd23dad1736901c7815ace4b7b103d548c5c8d5"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.302597 4730 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2qdqs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: 
connection refused" start-of-body= Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.302635 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" podUID="e190e098-9bc8-492f-9657-f6ccfb836f23" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.305975 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" event={"ID":"afbc673a-2498-49dc-b98e-d7ddc58d2999","Type":"ContainerStarted","Data":"a094a3d0363445cd7d6b98232c158d8cd5f73dfb606055f9c0cd72f2222dd487"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.308038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" event={"ID":"d9a5496e-57aa-4f42-b53d-590fb534d26e","Type":"ContainerStarted","Data":"0bb0e807275a1387ac8676045c734f0052cf50a2b0d54b00ca73af6fab20a832"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.309904 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" event={"ID":"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3","Type":"ContainerStarted","Data":"1c8da2a9ebf5915ca8d0df7153335b863ed32da9f11f6f81330baf3aae11e179"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.312728 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" event={"ID":"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf","Type":"ContainerStarted","Data":"89071ca70267ff55db3a1c4b03f32d342093e047d26dbfb562535cb4096d8fec"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.313760 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-8n5gl" event={"ID":"0a030e24-2337-49a2-a5e2-118714cd7ff9","Type":"ContainerStarted","Data":"32d15dc22f29032c9176bff78f122e31694be2d1343519d8b5a6ad0a4ba97919"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.315731 4730 generic.go:334] "Generic (PLEG): container finished" podID="a835e0ec-4721-4824-8846-fcc7e12db3f9" containerID="d8162848222abf2d634eaafe2c6d69c1712537bbdeceea6939fe1d66ff352273" exitCode=0 Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.315908 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" event={"ID":"a835e0ec-4721-4824-8846-fcc7e12db3f9","Type":"ContainerDied","Data":"d8162848222abf2d634eaafe2c6d69c1712537bbdeceea6939fe1d66ff352273"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.315934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" event={"ID":"a835e0ec-4721-4824-8846-fcc7e12db3f9","Type":"ContainerStarted","Data":"51c31820868f583075835a067e29cabb504158f08c52200ba4cb3b15dee730bd"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.317027 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" event={"ID":"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e","Type":"ContainerStarted","Data":"9214030830d3037e2a4950e8f2c2a31b6d9704ba2fb21749bb3e069ed799d152"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.318712 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" event={"ID":"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d","Type":"ContainerStarted","Data":"8b08d94e6836d69f6385db916f7474305cf7b5dfd56a9f620596f4b3d653dbba"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.318738 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" event={"ID":"7406e06c-cbde-48f9-b5e7-57a2a86b5a4d","Type":"ContainerStarted","Data":"a2fc941d75512533be44af5bf426a971f659780ec57dca9c624e2dc271a196e0"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.319870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" event={"ID":"d784e9cd-d5af-496e-abca-ce30096bb0d0","Type":"ContainerStarted","Data":"13ef7c50b79282096a44f992739c37f752f93e158e231b26d98ea7e454bf1246"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.322307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" event={"ID":"7d581333-2d6e-44d6-a6fc-b90c3b16baad","Type":"ContainerStarted","Data":"cb4d8a879d3b88e600163a2dbb81d34e13e7958d84c3d8cd636301338a9068d2"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.322338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" event={"ID":"7d581333-2d6e-44d6-a6fc-b90c3b16baad","Type":"ContainerStarted","Data":"5c3bf7e7eb16ae676aea972ba72436827e140b23357ea8a289eaf53fce4605d2"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.323213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" event={"ID":"428fa435-b92e-4363-82bb-40316d3e0a26","Type":"ContainerStarted","Data":"0c13d15232b4262b7ee9383dc49e152107617f19e0a83f3352de9a1be82e3bfd"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.326354 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" event={"ID":"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974","Type":"ContainerStarted","Data":"6753c2d08477eedd2a2f74ec7d764d675404bd8646157af35dce36f94613063f"} Mar 20 15:42:40 crc kubenswrapper[4730]: 
I0320 15:42:40.327369 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" event={"ID":"0bc0c5b5-55bb-4339-8162-bb647b833006","Type":"ContainerStarted","Data":"50492069065d98dc384151c573905cb8667e0fb3fbd14c80994e16f59e459f10"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.330780 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-g7hdt" event={"ID":"d32c9cec-9f6c-4304-8bc9-d2e52128470a","Type":"ContainerStarted","Data":"4d56ab2bb10646020e5ffdd67001fda8bd9f2edaa6c93e2c743eb071781e1c75"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.338988 4730 generic.go:334] "Generic (PLEG): container finished" podID="7662a0cc-faaa-47da-90f9-f3a8907a0401" containerID="d79826854e3b3b91a0bb5b5b6da3d993958db7ba9916a9865bfac03bf08f6219" exitCode=0 Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.339062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" event={"ID":"7662a0cc-faaa-47da-90f9-f3a8907a0401","Type":"ContainerDied","Data":"d79826854e3b3b91a0bb5b5b6da3d993958db7ba9916a9865bfac03bf08f6219"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.343370 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" event={"ID":"2499559b-b31f-4dab-89a0-964964dc596e","Type":"ContainerStarted","Data":"40fd46e178b8ce8c75de09a0315b49f9c04961cf890e7392083f9a7a77124dd2"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.347206 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" event={"ID":"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206","Type":"ContainerStarted","Data":"05f7c615c59cffb48afbdaba593e14531efb4fc81d8ff55c1881a4b37920563e"} Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.405753 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.407559 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:40.907546591 +0000 UTC m=+220.120917960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.467782 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.480618 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.486531 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sxbnn"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.506110 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv"] Mar 20 15:42:40 crc 
kubenswrapper[4730]: I0320 15:42:40.514451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.515266 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.015234006 +0000 UTC m=+220.228605375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.552971 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w5gdn"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.618505 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.618939 4730 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.118924039 +0000 UTC m=+220.332295408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.644814 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6w7m9"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.649790 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.699732 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:40 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:40 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:40 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.700171 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.724884 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.725559 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.225528221 +0000 UTC m=+220.438899600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.728609 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.736017 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.748765 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6"] Mar 20 15:42:40 crc kubenswrapper[4730]: W0320 15:42:40.771965 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f80b42_cff3_48a7_9e09_02ff65e9d9f8.slice/crio-6a8ec811ee5d2af7d6e0e366cb0349688bcb6f448f7910bdfad56add1b5d5f9a WatchSource:0}: Error finding container 6a8ec811ee5d2af7d6e0e366cb0349688bcb6f448f7910bdfad56add1b5d5f9a: Status 404 returned error can't find the container with id 6a8ec811ee5d2af7d6e0e366cb0349688bcb6f448f7910bdfad56add1b5d5f9a Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.824463 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-92dt7" podStartSLOduration=167.824439236 podStartE2EDuration="2m47.824439236s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:40.823124816 +0000 UTC m=+220.036496185" watchObservedRunningTime="2026-03-20 15:42:40.824439236 +0000 UTC m=+220.037810605" Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.827566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.828043 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.328028587 +0000 UTC m=+220.541399956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.840621 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.917730 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.925102 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567022-wf5nv"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.932809 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:40 crc kubenswrapper[4730]: E0320 15:42:40.933302 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.433282347 +0000 UTC m=+220.646653706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.945716 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.963160 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7ckfm"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.988834 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz"] Mar 20 15:42:40 crc kubenswrapper[4730]: I0320 15:42:40.989963 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-84pdq"] Mar 20 15:42:40 crc kubenswrapper[4730]: W0320 15:42:40.999380 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82f2ec3_df30_4b45_be3a_9858edb2bb7f.slice/crio-4c90f2d398800b6f83cd9e72f53236850e7923840c593ca2e3325bc0305bd853 WatchSource:0}: Error finding container 4c90f2d398800b6f83cd9e72f53236850e7923840c593ca2e3325bc0305bd853: Status 404 returned error can't find the container with id 4c90f2d398800b6f83cd9e72f53236850e7923840c593ca2e3325bc0305bd853 Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.018769 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.031433 4730 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q"] Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.034065 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.034465 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.534448962 +0000 UTC m=+220.747820331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.135621 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.135807 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.635779512 +0000 UTC m=+220.849150881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.135966 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.136285 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.636268347 +0000 UTC m=+220.849639716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.216467 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xspkm" podStartSLOduration=168.216437475 podStartE2EDuration="2m48.216437475s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.215783765 +0000 UTC m=+220.429155154" watchObservedRunningTime="2026-03-20 15:42:41.216437475 +0000 UTC m=+220.429808844" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.237770 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.238055 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.738007929 +0000 UTC m=+220.951379468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.238606 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.239154 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.739143634 +0000 UTC m=+220.952515023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.335750 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-csmvr" podStartSLOduration=169.335729638 podStartE2EDuration="2m49.335729638s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.332953703 +0000 UTC m=+220.546325072" watchObservedRunningTime="2026-03-20 15:42:41.335729638 +0000 UTC m=+220.549101007" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.342344 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.342481 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.842460425 +0000 UTC m=+221.055831794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.342621 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.343012 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.843002402 +0000 UTC m=+221.056373771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.373673 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" podStartSLOduration=168.373652716 podStartE2EDuration="2m48.373652716s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.372475219 +0000 UTC m=+220.585846608" watchObservedRunningTime="2026-03-20 15:42:41.373652716 +0000 UTC m=+220.587024085" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.373920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" event={"ID":"b82f2ec3-df30-4b45-be3a-9858edb2bb7f","Type":"ContainerStarted","Data":"4c90f2d398800b6f83cd9e72f53236850e7923840c593ca2e3325bc0305bd853"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.375938 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" event={"ID":"e213a906-8ad6-45c1-b832-a42d58fd91c6","Type":"ContainerStarted","Data":"4cf371ea7b80f4f7f2659d7fa348c114bc4ea58e993d01c8e9a6458dca12d495"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.384768 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7ckfm" 
event={"ID":"d04de14b-8e96-44ab-818f-2b08d78d2e14","Type":"ContainerStarted","Data":"0a47a67ec0dc19ef7601a0ba2ed0577a14eaad0157f2d05a756249d4366eb8d1"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.392017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" event={"ID":"9a38d833-db72-4566-b139-7788730a502a","Type":"ContainerStarted","Data":"2bd463f1b26cfe1b9822fd238c03679adad9d834d5b448d944528f6deceb749a"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.392495 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.393150 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" event={"ID":"f6f09179-5752-4a5a-ab79-72a176bbdd9a","Type":"ContainerStarted","Data":"edfc29168003c65e0a93d4afbd0615854285eea28bcaec1f42ae6712ea29871b"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.395914 4730 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hrm7z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.395967 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" podUID="9a38d833-db72-4566-b139-7788730a502a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.418606 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" 
event={"ID":"5612cc7f-9299-43b4-b97c-cf579a416e84","Type":"ContainerStarted","Data":"27c5cbe987f40fd46a08099ce914eec5acaeabef15364c0a1d3d11cb926d6f64"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.428770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" event={"ID":"a250c56d-72fb-473d-98ce-c013e9d15b4a","Type":"ContainerStarted","Data":"93cfd3cf713293a3037ac8a290383a0c692a549bb4e8644183f233991bbba17a"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.430618 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" event={"ID":"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7","Type":"ContainerStarted","Data":"69217f465de0b962324e83537fcde48541bfa56121b2fc7b8f8b4deb8dd0ef37"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.443629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.444139 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:41.944116645 +0000 UTC m=+221.157488014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.446291 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" event={"ID":"ad1f04f2-f7c4-4bc6-9daf-0db7a0809206","Type":"ContainerStarted","Data":"5e9647c566e4a2e0ff1156cd5e7eb1ef5bda3db8c35f2d9613e275de15e9262b"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.451055 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9kgl8" podStartSLOduration=169.451034518 podStartE2EDuration="2m49.451034518s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.409786658 +0000 UTC m=+220.623158027" watchObservedRunningTime="2026-03-20 15:42:41.451034518 +0000 UTC m=+220.664405887" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.452993 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" podStartSLOduration=168.452909066 podStartE2EDuration="2m48.452909066s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.45011712 +0000 UTC m=+220.663488489" watchObservedRunningTime="2026-03-20 15:42:41.452909066 +0000 UTC m=+220.666280435" Mar 20 15:42:41 crc 
kubenswrapper[4730]: I0320 15:42:41.458350 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4e38bce-6ae6-451b-aa9f-7a98dfa4d974" containerID="d66e9193c3c6c83eb2ebedb3d3324d7da8814c89e9caf33df63870d285cbd22f" exitCode=0 Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.458457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" event={"ID":"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974","Type":"ContainerDied","Data":"d66e9193c3c6c83eb2ebedb3d3324d7da8814c89e9caf33df63870d285cbd22f"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.474540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" event={"ID":"2083343b-2ec0-4306-a0a5-f74dd0f63746","Type":"ContainerStarted","Data":"e23e2899ad0868b3e771327f197c59545128a235e8aac31bd5b6503ebfd556a0"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.490873 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" event={"ID":"7d87adfe-3206-4175-8d8f-5a00015cc61e","Type":"ContainerStarted","Data":"94b5cc48ba667f00e1531e3aeaa43c807fc5eafadb1a111b034b9b327635ca47"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.494433 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" event={"ID":"be19fb65-a04f-42df-9b96-e620b58754bb","Type":"ContainerStarted","Data":"c95af936fdf0b6d7d2255a8837cf5081d1d6b867d8c027bb70bec58e5bed039e"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.496821 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ldb64" podStartSLOduration=169.496804867 podStartE2EDuration="2m49.496804867s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.495876699 +0000 UTC m=+220.709248078" watchObservedRunningTime="2026-03-20 15:42:41.496804867 +0000 UTC m=+220.710176236" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.501588 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" event={"ID":"49896a92-a6b0-45ea-a736-09a368d90be4","Type":"ContainerStarted","Data":"8eac9c0771b67ae50023fe08427b8f566bdc1e222e5bbe39d99f94e939b63049"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.506613 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-g7hdt" event={"ID":"d32c9cec-9f6c-4304-8bc9-d2e52128470a","Type":"ContainerStarted","Data":"7fc7d5b4e7ae1372876eb50a11f4bf7d7325f4335144ba6e97cc70c7eddffe35"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.507497 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.510324 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" event={"ID":"c2d975f8-1a1e-4921-aef0-3c4652992a02","Type":"ContainerStarted","Data":"f6530cb3d765deb1f0299e16497ef47f75610b802a9660a518ba8c1cd5edba1e"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.510976 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.511009 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.514276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" event={"ID":"8e224294-495e-4d65-96f2-8e0d2a444ef1","Type":"ContainerStarted","Data":"7e206e562f846a228d4fb8ab07357cb1314f3c3a005d6a5daa36de361b7f0bd8"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.516466 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" event={"ID":"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf","Type":"ContainerStarted","Data":"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.516933 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.520525 4730 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tgpgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.520571 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.521386 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6w7m9" 
event={"ID":"8a268a97-bf49-4ed6-b239-1a088c3c4e4f","Type":"ContainerStarted","Data":"b903a530879a324b343372dc3be1834fdf4c99fb4ad19202401541967a2445fd"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.521455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6w7m9" event={"ID":"8a268a97-bf49-4ed6-b239-1a088c3c4e4f","Type":"ContainerStarted","Data":"458e89e3b66feb1609fd81aa7a06e1940ced11ee4340c232ebf9dad3fdde7647"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.565644 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.567413 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.067395341 +0000 UTC m=+221.280766790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.579270 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" event={"ID":"2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3","Type":"ContainerStarted","Data":"c77b62f07c3e4d65f624ea3f3f47f2219b15cd19caf1b6d64b3a8a3457be9687"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.579362 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.582132 4730 patch_prober.go:28] interesting pod/console-operator-58897d9998-mkxg7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.582203 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" podUID="2eb5b4cf-be45-4c3d-abbf-9a3e525d5ed3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.587878 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6w7m9" podStartSLOduration=5.5878520609999995 
podStartE2EDuration="5.587852061s" podCreationTimestamp="2026-03-20 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.582661831 +0000 UTC m=+220.796033210" watchObservedRunningTime="2026-03-20 15:42:41.587852061 +0000 UTC m=+220.801223430" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.589623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8n5gl" event={"ID":"0a030e24-2337-49a2-a5e2-118714cd7ff9","Type":"ContainerStarted","Data":"3b6a89a603769cc86462adb2fcdb558fe4a73266e899135349c50f340bcf3fe8"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.593457 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" event={"ID":"0bc0c5b5-55bb-4339-8162-bb647b833006","Type":"ContainerStarted","Data":"b98c1bedf76d89690ddd9448b63471ff8500717beaf329adfb8782ab79c2db62"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.612302 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" event={"ID":"9ac46477-04bc-4d0a-b28e-b687c690dd5a","Type":"ContainerStarted","Data":"7debbacc86d6401f66bd6c5e9ce89d016e0a163261c2f379eb398875e46a4673"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.613092 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" podStartSLOduration=168.613068167 podStartE2EDuration="2m48.613068167s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.607988491 +0000 UTC m=+220.821359860" watchObservedRunningTime="2026-03-20 15:42:41.613068167 
+0000 UTC m=+220.826439536" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.625025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerStarted","Data":"759bd2ea27103b987add0fca450b8a256b74867157aa94299acbb52889decc8f"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.633600 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" event={"ID":"ca0985ab-94f6-4f4e-b8b4-0ee710e01fcf","Type":"ContainerStarted","Data":"2134b158b84d7ec00d4e13bec16838b8d83c57a2b4345e768383e4b2751337be"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.635363 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" event={"ID":"dbb6ff6b-d521-408a-831c-a6a9c524a671","Type":"ContainerStarted","Data":"c3557571a15f9bef351df1be26a286f7e32e817aabc76197984264844a94269b"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.637230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" event={"ID":"1010c304-4912-42b2-aa8c-17d44c4bf6cb","Type":"ContainerStarted","Data":"59c910422c78ad87a68d5f694fff37b6d90d96395fc4f18b5ca6596e497918bc"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.646428 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" event={"ID":"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce","Type":"ContainerStarted","Data":"9832423804f3463aa221d450167df97099db90b4da825e192ef78c2beac04aa6"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.650956 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" 
event={"ID":"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8","Type":"ContainerStarted","Data":"6a8ec811ee5d2af7d6e0e366cb0349688bcb6f448f7910bdfad56add1b5d5f9a"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.658114 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-g7hdt" podStartSLOduration=168.658090023 podStartE2EDuration="2m48.658090023s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.658080703 +0000 UTC m=+220.871452072" watchObservedRunningTime="2026-03-20 15:42:41.658090023 +0000 UTC m=+220.871461392" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.667129 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.668616 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.168591846 +0000 UTC m=+221.381963215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.682810 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" event={"ID":"7d581333-2d6e-44d6-a6fc-b90c3b16baad","Type":"ContainerStarted","Data":"41ae265ff086079d6c2cd85de5690413ab06cd5907b56c70e3f5802cb055ffdb"} Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.700636 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:41 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:41 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:41 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.700695 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.774722 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: 
\"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.775011 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.274999213 +0000 UTC m=+221.488370582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.786641 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8n5gl" podStartSLOduration=5.78661807 podStartE2EDuration="5.78661807s" podCreationTimestamp="2026-03-20 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:41.780893004 +0000 UTC m=+220.994264373" watchObservedRunningTime="2026-03-20 15:42:41.78661807 +0000 UTC m=+220.999989439" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.860063 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pncxq" podStartSLOduration=168.860045061 podStartE2EDuration="2m48.860045061s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 15:42:41.857581735 +0000 UTC m=+221.070953104" watchObservedRunningTime="2026-03-20 15:42:41.860045061 +0000 UTC m=+221.073416430" Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.885470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.886864 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.386849146 +0000 UTC m=+221.600220515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:41 crc kubenswrapper[4730]: I0320 15:42:41.987884 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:41 crc kubenswrapper[4730]: E0320 15:42:41.988518 4730 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.488505606 +0000 UTC m=+221.701876975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.103634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.104140 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.604081124 +0000 UTC m=+221.817452493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.112703 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2qdqs" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.208141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.208516 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.708505249 +0000 UTC m=+221.921876618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.215365 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zr8dk" podStartSLOduration=170.215345 podStartE2EDuration="2m50.215345s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.191342281 +0000 UTC m=+221.404713650" watchObservedRunningTime="2026-03-20 15:42:42.215345 +0000 UTC m=+221.428716369" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.263731 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" podStartSLOduration=170.263710689 podStartE2EDuration="2m50.263710689s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.26180623 +0000 UTC m=+221.475177599" watchObservedRunningTime="2026-03-20 15:42:42.263710689 +0000 UTC m=+221.477082058" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.312629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.313144 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.813128001 +0000 UTC m=+222.026499370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.416654 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.417384 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:42.91737136 +0000 UTC m=+222.130742729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.517359 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.517912 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.017898514 +0000 UTC m=+222.231269883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.619174 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.619993 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.119973887 +0000 UTC m=+222.333345256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.695637 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" event={"ID":"2499559b-b31f-4dab-89a0-964964dc596e","Type":"ContainerStarted","Data":"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.696227 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.698191 4730 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-st79s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.698267 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" podUID="2499559b-b31f-4dab-89a0-964964dc596e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.698959 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" 
event={"ID":"1010c304-4912-42b2-aa8c-17d44c4bf6cb","Type":"ContainerStarted","Data":"34a828dd543a65e021600e5744d24e1c4cd7f93116bd1479dc4eaab36971fdec"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.701548 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:42 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:42 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:42 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.701604 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.702838 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" event={"ID":"afbc673a-2498-49dc-b98e-d7ddc58d2999","Type":"ContainerStarted","Data":"0f5573acf28dc25f24f5f3fdda2ac538b23f56f1d9a72474b58a86223e9726c6"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.719998 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" podStartSLOduration=170.719979746 podStartE2EDuration="2m50.719979746s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.71914272 +0000 UTC m=+221.932514089" watchObservedRunningTime="2026-03-20 15:42:42.719979746 +0000 UTC m=+221.933351115" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 
15:42:42.720345 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.720818 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.220803971 +0000 UTC m=+222.434175340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.722044 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kfjm5" podStartSLOduration=170.722034539 podStartE2EDuration="2m50.722034539s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.436303742 +0000 UTC m=+221.649675111" watchObservedRunningTime="2026-03-20 15:42:42.722034539 +0000 UTC m=+221.935405908" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.736972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" 
event={"ID":"a835e0ec-4721-4824-8846-fcc7e12db3f9","Type":"ContainerStarted","Data":"4e940161de372f92b1b6ae2112926192c8e5439ab340f756511447e1e2aa051b"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.737382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" event={"ID":"a835e0ec-4721-4824-8846-fcc7e12db3f9","Type":"ContainerStarted","Data":"431e494bdf53c4458c9d14bc444a03cea0ae1b35843b728952d5f1c5ec48d73c"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.749790 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" event={"ID":"f6f09179-5752-4a5a-ab79-72a176bbdd9a","Type":"ContainerStarted","Data":"7dffa997caec06e934ce49de06db4bfa2ff15d9b634cdef5fd96299b33964d8c"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.750841 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.752358 4730 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m7fsc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.752536 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" podUID="f6f09179-5752-4a5a-ab79-72a176bbdd9a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.754994 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sxbnn" podStartSLOduration=169.754978443 
podStartE2EDuration="2m49.754978443s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.754154848 +0000 UTC m=+221.967526217" watchObservedRunningTime="2026-03-20 15:42:42.754978443 +0000 UTC m=+221.968349812" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.775971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" event={"ID":"c9f80b42-cff3-48a7-9e09-02ff65e9d9f8","Type":"ContainerStarted","Data":"1c642122eaed210a012524280043fba3a122d1a17fa250fb9e9bbff43d7c0d99"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.786340 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34230: no serving certificate available for the kubelet" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.787202 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mzjxx" podStartSLOduration=169.787180425 podStartE2EDuration="2m49.787180425s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.784890794 +0000 UTC m=+221.998262163" watchObservedRunningTime="2026-03-20 15:42:42.787180425 +0000 UTC m=+222.000551794" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.810097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" event={"ID":"9ac46477-04bc-4d0a-b28e-b687c690dd5a","Type":"ContainerStarted","Data":"95661cbf74baa8e09ad74eae6de240cfa9311adc277f9664c79b9746f96a534d"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.821520 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerStarted","Data":"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.822475 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.847320 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.853040 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.353014132 +0000 UTC m=+222.566385501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.862890 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34234: no serving certificate available for the kubelet" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.877458 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-klbh8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.877577 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.879655 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" podStartSLOduration=169.879629411 podStartE2EDuration="2m49.879629411s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.878411224 +0000 UTC m=+222.091782603" watchObservedRunningTime="2026-03-20 15:42:42.879629411 +0000 UTC m=+222.093000780" Mar 20 
15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.890633 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jkk9s" podStartSLOduration=169.890606429 podStartE2EDuration="2m49.890606429s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.820912353 +0000 UTC m=+222.034283722" watchObservedRunningTime="2026-03-20 15:42:42.890606429 +0000 UTC m=+222.103977808" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.905774 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" event={"ID":"b82f2ec3-df30-4b45-be3a-9858edb2bb7f","Type":"ContainerStarted","Data":"00bfad27e02bae0f610ea7ac00033db490360b11939b34bebce351221d4b2059"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.921362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" event={"ID":"df4fa0ea-abb1-49ea-8d74-2992c71c1a0e","Type":"ContainerStarted","Data":"10181f70353f4226095b25118e92db2f30271e6da8103cbb61364f2175bbb612"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.924815 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:42 crc kubenswrapper[4730]: E0320 15:42:42.926180 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:43.426164054 +0000 UTC m=+222.639535423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.945034 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwghv" podStartSLOduration=169.945014184 podStartE2EDuration="2m49.945014184s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.912842724 +0000 UTC m=+222.126214093" watchObservedRunningTime="2026-03-20 15:42:42.945014184 +0000 UTC m=+222.158385553" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.964719 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34250: no serving certificate available for the kubelet" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.968979 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" podStartSLOduration=169.968956882 podStartE2EDuration="2m49.968956882s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.949224854 +0000 UTC m=+222.162596233" watchObservedRunningTime="2026-03-20 15:42:42.968956882 +0000 UTC m=+222.182328251" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 
15:42:42.973587 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqxff" podStartSLOduration=169.973563984 podStartE2EDuration="2m49.973563984s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.965881367 +0000 UTC m=+222.179252736" watchObservedRunningTime="2026-03-20 15:42:42.973563984 +0000 UTC m=+222.186935353" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.991908 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" event={"ID":"4e2a7090-33b8-4137-be83-5c2e5ab1ccc7","Type":"ContainerStarted","Data":"f244d7380b63cf2232e2621e28ee1118acdc16f21fb149431cc044383ad81627"} Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.992604 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.997418 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-m7cfz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 20 15:42:42 crc kubenswrapper[4730]: I0320 15:42:42.997497 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" podUID="4e2a7090-33b8-4137-be83-5c2e5ab1ccc7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:42.999399 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" event={"ID":"c2d975f8-1a1e-4921-aef0-3c4652992a02","Type":"ContainerStarted","Data":"54cdbc914b6362b1ea7a8bc305eb5891b8cdc477d3fd9433625eb7cbb6fd09b1"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:42.999572 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c2cgf" podStartSLOduration=169.999551934 podStartE2EDuration="2m49.999551934s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:42.997382127 +0000 UTC m=+222.210753496" watchObservedRunningTime="2026-03-20 15:42:42.999551934 +0000 UTC m=+222.212923313" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.000979 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.026676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.032540 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.532524279 +0000 UTC m=+222.745895648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.040043 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7ckfm" event={"ID":"d04de14b-8e96-44ab-818f-2b08d78d2e14","Type":"ContainerStarted","Data":"125158b546ebdfa884a38ad81603b50a265f167bb28865172900b1954094a057"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.075982 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" podStartSLOduration=170.075961246 podStartE2EDuration="2m50.075961246s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.02769085 +0000 UTC m=+222.241062209" watchObservedRunningTime="2026-03-20 15:42:43.075961246 +0000 UTC m=+222.289332615" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.095746 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34258: no serving certificate available for the kubelet" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.098453 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-w5gdn" podStartSLOduration=170.098432338 podStartE2EDuration="2m50.098432338s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 15:42:43.096616352 +0000 UTC m=+222.309987721" watchObservedRunningTime="2026-03-20 15:42:43.098432338 +0000 UTC m=+222.311803707" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.101911 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" event={"ID":"d4e38bce-6ae6-451b-aa9f-7a98dfa4d974","Type":"ContainerStarted","Data":"17c1379d3c8019412ef2eeb677c3ea1322a4e563c714928d4f30b53ddef6736c"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.103066 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.127840 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.129360 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.62934441 +0000 UTC m=+222.842715779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.156138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" event={"ID":"5612cc7f-9299-43b4-b97c-cf579a416e84","Type":"ContainerStarted","Data":"c4dc87db1fe52d3f1e3497425d0229f7e1b66a568f93bb998a46373961c33bd7"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.167811 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" event={"ID":"49896a92-a6b0-45ea-a736-09a368d90be4","Type":"ContainerStarted","Data":"db9154d5dd76ee3ecdb3c61eccfc8d1ee9c6bf92c29bb92fa63ea79dcea49104"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.183569 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" event={"ID":"2083343b-2ec0-4306-a0a5-f74dd0f63746","Type":"ContainerStarted","Data":"4f7a2cb9dd6a91450eb10016bed75c95e0ee2f5ef2c55cd2382b1349dde38159"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.183617 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" event={"ID":"2083343b-2ec0-4306-a0a5-f74dd0f63746","Type":"ContainerStarted","Data":"6900231cf56665b242f0c9c0d06301ddbd1f4646047717504ae334d7da77ec58"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.184206 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.188704 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34264: no serving certificate available for the kubelet" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.207185 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" podStartSLOduration=171.207164116 podStartE2EDuration="2m51.207164116s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.145343512 +0000 UTC m=+222.358714881" watchObservedRunningTime="2026-03-20 15:42:43.207164116 +0000 UTC m=+222.420535495" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.207971 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" podStartSLOduration=170.20796343 podStartE2EDuration="2m50.20796343s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.205607728 +0000 UTC m=+222.418979097" watchObservedRunningTime="2026-03-20 15:42:43.20796343 +0000 UTC m=+222.421334819" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.224542 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" event={"ID":"7662a0cc-faaa-47da-90f9-f3a8907a0401","Type":"ContainerStarted","Data":"7984f06d3ad268ae2f1beedd5ac29788c74cabc76e9bb68fead3a69c2cd21f8e"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.225962 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" podStartSLOduration=170.225933154 podStartE2EDuration="2m50.225933154s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.225415248 +0000 UTC m=+222.438786617" watchObservedRunningTime="2026-03-20 15:42:43.225933154 +0000 UTC m=+222.439304513" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.229841 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.230497 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.730466863 +0000 UTC m=+222.943838402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.242053 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" event={"ID":"8e224294-495e-4d65-96f2-8e0d2a444ef1","Type":"ContainerStarted","Data":"997a70846ad90a701274abde3d57150a6fa780b3ae6f7e74441f6d1aa47eafbd"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.264556 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" event={"ID":"be19fb65-a04f-42df-9b96-e620b58754bb","Type":"ContainerStarted","Data":"647092b460bb07570b06908ca4f98239d0470ba3df7bb23adf207cb830d51de7"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.269184 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" podStartSLOduration=170.269153994 podStartE2EDuration="2m50.269153994s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.26478794 +0000 UTC m=+222.478159309" watchObservedRunningTime="2026-03-20 15:42:43.269153994 +0000 UTC m=+222.482525363" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.279702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" 
event={"ID":"dbb6ff6b-d521-408a-831c-a6a9c524a671","Type":"ContainerStarted","Data":"41f529b830bb386d3cb661870fe6a08af01d1525783d2c646e82019d8345f0a5"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.295224 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" event={"ID":"a250c56d-72fb-473d-98ce-c013e9d15b4a","Type":"ContainerStarted","Data":"3dda7ec3cc48a8d70f976fb14be79ca042ce7984f02f652763bdc83faf96249f"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.304216 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" podStartSLOduration=170.304190873 podStartE2EDuration="2m50.304190873s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.299421696 +0000 UTC m=+222.512793055" watchObservedRunningTime="2026-03-20 15:42:43.304190873 +0000 UTC m=+222.517562242" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.309539 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34268: no serving certificate available for the kubelet" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.329625 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" event={"ID":"5c0e41b3-aa2d-4083-acb2-f0f68a29fcce","Type":"ContainerStarted","Data":"7f508926108469d183ad53fc1cb87f9de68c82f3dedad0d66461e40806ddf1e0"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.330718 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.330996 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.830976788 +0000 UTC m=+223.044348157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.332162 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.333079 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.833062992 +0000 UTC m=+223.046434361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.366628 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" podStartSLOduration=171.366609125 podStartE2EDuration="2m51.366609125s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.330093231 +0000 UTC m=+222.543464600" watchObservedRunningTime="2026-03-20 15:42:43.366609125 +0000 UTC m=+222.579980494" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.369032 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" event={"ID":"d9a5496e-57aa-4f42-b53d-590fb534d26e","Type":"ContainerStarted","Data":"c99c44110fd7accbd0cb3909619d6731f09534a84d37946809f5159851ac44a6"} Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.370602 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.370678 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.378552 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.392753 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" podStartSLOduration=170.392733729 podStartE2EDuration="2m50.392733729s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.370982339 +0000 UTC m=+222.584353718" watchObservedRunningTime="2026-03-20 15:42:43.392733729 +0000 UTC m=+222.606105098" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.394134 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nfww4" podStartSLOduration=170.394129132 podStartE2EDuration="2m50.394129132s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.391898063 +0000 UTC m=+222.605269432" watchObservedRunningTime="2026-03-20 15:42:43.394129132 +0000 UTC m=+222.607500501" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.400408 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.400618 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34270: no serving certificate available for the kubelet" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.431674 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-k6z2l" podStartSLOduration=170.431656097 podStartE2EDuration="2m50.431656097s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.430719279 +0000 UTC m=+222.644090638" watchObservedRunningTime="2026-03-20 15:42:43.431656097 +0000 UTC m=+222.645027456" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.433088 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sk8hp" podStartSLOduration=170.433083821 podStartE2EDuration="2m50.433083821s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.412231379 +0000 UTC m=+222.625602748" watchObservedRunningTime="2026-03-20 15:42:43.433083821 +0000 UTC m=+222.646455190" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.434064 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.435224 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:43.935206717 +0000 UTC m=+223.148578086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.463562 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" podStartSLOduration=170.463539039 podStartE2EDuration="2m50.463539039s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:43.450408535 +0000 UTC m=+222.663779904" watchObservedRunningTime="2026-03-20 15:42:43.463539039 +0000 UTC m=+222.676910408" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.518113 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34278: no serving certificate available for the kubelet" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.542170 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.542666 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:44.042647055 +0000 UTC m=+223.256018424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.643568 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.643786 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.143754218 +0000 UTC m=+223.357125587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.644073 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.644434 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.144422668 +0000 UTC m=+223.357794037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.700008 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:43 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:43 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:43 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.700066 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.745414 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.745538 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:44.245513401 +0000 UTC m=+223.458884770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.745622 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.745926 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.245915673 +0000 UTC m=+223.459287042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.792672 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.792735 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.794855 4730 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nsdw7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.794907 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" podUID="7662a0cc-faaa-47da-90f9-f3a8907a0401" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.847373 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:43 crc 
kubenswrapper[4730]: E0320 15:42:43.847715 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.347701197 +0000 UTC m=+223.561072566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:43 crc kubenswrapper[4730]: I0320 15:42:43.948812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:43 crc kubenswrapper[4730]: E0320 15:42:43.949237 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.449222313 +0000 UTC m=+223.662593682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.039577 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mkxg7" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.050408 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.050882 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.550859522 +0000 UTC m=+223.764230891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.152566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.152943 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.652929484 +0000 UTC m=+223.866300863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.200964 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34054: no serving certificate available for the kubelet" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.253841 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.253938 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.753920704 +0000 UTC m=+223.967292073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.254166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.254444 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.75443727 +0000 UTC m=+223.967808639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.355258 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.355444 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.855416669 +0000 UTC m=+224.068788038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.355608 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.355974 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.855917684 +0000 UTC m=+224.069289053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.384453 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxnn6" event={"ID":"49896a92-a6b0-45ea-a736-09a368d90be4","Type":"ContainerStarted","Data":"1d9d66ac03273f78a9f6a3ad1aae37d09c47a1b7f56e03ec96fee85dd6a4bb1e"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.392829 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7hzc8" event={"ID":"8e224294-495e-4d65-96f2-8e0d2a444ef1","Type":"ContainerStarted","Data":"44b34c7b3becd463763d417a566e9d007f7574f3829f6b84082984b14e6b178f"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.401807 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" event={"ID":"e213a906-8ad6-45c1-b832-a42d58fd91c6","Type":"ContainerStarted","Data":"c12b4267a4d26c1887f06a65d0c0eef036471fefc0bd5f1566fbd06748d28a0b"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.401853 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" event={"ID":"e213a906-8ad6-45c1-b832-a42d58fd91c6","Type":"ContainerStarted","Data":"978fae8413ee06ec2c75f391646b1b65e597ee8ba09b566039d9c24d7be1cc57"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.404713 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" 
event={"ID":"428fa435-b92e-4363-82bb-40316d3e0a26","Type":"ContainerStarted","Data":"ea678b6938b223ad3eab964671c3e3289406901e88fa28062e8548fa322ccea9"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.407213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7ckfm" event={"ID":"d04de14b-8e96-44ab-818f-2b08d78d2e14","Type":"ContainerStarted","Data":"dd47e8acc78b6239c0aa5d972a2690f7f8a6960538a611883344038e4c3fafb4"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.407926 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.412584 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjhxb" event={"ID":"dbb6ff6b-d521-408a-831c-a6a9c524a671","Type":"ContainerStarted","Data":"31836fab2b7c4a2c586a016e06173aaf758a2ad27b4f9986477e91cc46024853"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.425557 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-84pdq" podStartSLOduration=171.425541608 podStartE2EDuration="2m51.425541608s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:44.424057332 +0000 UTC m=+223.637428701" watchObservedRunningTime="2026-03-20 15:42:44.425541608 +0000 UTC m=+223.638912967" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.425869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4dp6q" event={"ID":"5612cc7f-9299-43b4-b97c-cf579a416e84","Type":"ContainerStarted","Data":"bb882482d0e3a4b547690fe73d4699abed241d6c74988eb9d9229503d849a173"} Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426467 4730 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-klbh8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426505 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426579 4730 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-m7cfz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426609 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" podUID="4e2a7090-33b8-4137-be83-5c2e5ab1ccc7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426859 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.426886 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.433690 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.439373 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m7fsc" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.458928 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.459904 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:44.959887955 +0000 UTC m=+224.173259324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.476974 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7ckfm" podStartSLOduration=8.476957941 podStartE2EDuration="8.476957941s" podCreationTimestamp="2026-03-20 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:44.467819599 +0000 UTC m=+223.681190968" watchObservedRunningTime="2026-03-20 15:42:44.476957941 +0000 UTC m=+223.690329310" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.554771 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" podStartSLOduration=172.554754646 podStartE2EDuration="2m52.554754646s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:44.554120336 +0000 UTC m=+223.767491705" watchObservedRunningTime="2026-03-20 15:42:44.554754646 +0000 UTC m=+223.768126015" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.566197 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.568654 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.068639093 +0000 UTC m=+224.282010462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.667789 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.668327 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.168310492 +0000 UTC m=+224.381681861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.704141 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:44 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:44 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:44 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.704221 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.769947 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.770361 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:45.270346174 +0000 UTC m=+224.483717543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.872588 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.872893 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.37287777 +0000 UTC m=+224.586249139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:44 crc kubenswrapper[4730]: I0320 15:42:44.973938 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:44 crc kubenswrapper[4730]: E0320 15:42:44.974303 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.474286783 +0000 UTC m=+224.687658152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.010776 4730 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rbg9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.010830 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" podUID="d4e38bce-6ae6-451b-aa9f-7a98dfa4d974" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.011137 4730 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rbg9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.011170 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" podUID="d4e38bce-6ae6-451b-aa9f-7a98dfa4d974" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: 
connection refused" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.075306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.075688 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.575674374 +0000 UTC m=+224.789045743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.123918 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.166458 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.177288 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: 
\"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.177641 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.677628753 +0000 UTC m=+224.891000122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.278213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.278339 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.778321103 +0000 UTC m=+224.991692472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.278432 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.278739 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.778732196 +0000 UTC m=+224.992103565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.379556 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.380053 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.880035605 +0000 UTC m=+225.093406974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.430962 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" podUID="9a38d833-db72-4566-b139-7788730a502a" containerName="controller-manager" containerID="cri-o://2bd463f1b26cfe1b9822fd238c03679adad9d834d5b448d944528f6deceb749a" gracePeriod=30 Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.432615 4730 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-klbh8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.432669 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.433312 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerName="route-controller-manager" containerID="cri-o://c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989" 
gracePeriod=30 Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.449744 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m7cfz" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.480783 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.481124 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:45.981108127 +0000 UTC m=+225.194479506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.580278 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34066: no serving certificate available for the kubelet" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.581309 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.582794 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.082766287 +0000 UTC m=+225.296137656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.683748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.684268 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.184239461 +0000 UTC m=+225.397610830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.712721 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:45 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:45 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:45 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.712780 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.785948 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.786314 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:46.286226071 +0000 UTC m=+225.499597460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.786477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.787001 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.286993755 +0000 UTC m=+225.500365124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.891020 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.891589 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.391570614 +0000 UTC m=+225.604941983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:45 crc kubenswrapper[4730]: I0320 15:42:45.993116 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:45 crc kubenswrapper[4730]: E0320 15:42:45.993408 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.493396879 +0000 UTC m=+225.706768248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.069115 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.094966 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.095385 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.595369818 +0000 UTC m=+225.808741187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.197991 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert\") pod \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.198403 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb6sw\" (UniqueName: \"kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw\") pod 
\"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.198429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca\") pod \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.198670 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config\") pod \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\" (UID: \"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.198843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.199162 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.699150803 +0000 UTC m=+225.912522172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.199965 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" (UID: "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.200761 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config" (OuterVolumeSpecName: "config") pod "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" (UID: "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.207591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" (UID: "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.220721 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw" (OuterVolumeSpecName: "kube-api-access-wb6sw") pod "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" (UID: "2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf"). InnerVolumeSpecName "kube-api-access-wb6sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.299867 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.300022 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.799992658 +0000 UTC m=+226.013364027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.300108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.300208 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.300343 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb6sw\" (UniqueName: \"kubernetes.io/projected/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-kube-api-access-wb6sw\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.300363 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.300374 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:46 crc kubenswrapper[4730]: 
E0320 15:42:46.300417 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.800403541 +0000 UTC m=+226.013774970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.406152 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.406470 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.906405874 +0000 UTC m=+226.119777243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.408521 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.408883 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:46.90887469 +0000 UTC m=+226.122246059 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.432967 4730 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rbg9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.433028 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" podUID="d4e38bce-6ae6-451b-aa9f-7a98dfa4d974" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.445514 4730 generic.go:334] "Generic (PLEG): container finished" podID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerID="c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989" exitCode=0 Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.445585 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" event={"ID":"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf","Type":"ContainerDied","Data":"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989"} Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 
15:42:46.445620 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" event={"ID":"2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf","Type":"ContainerDied","Data":"89071ca70267ff55db3a1c4b03f32d342093e047d26dbfb562535cb4096d8fec"} Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.445638 4730 scope.go:117] "RemoveContainer" containerID="c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.445784 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.450738 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" event={"ID":"428fa435-b92e-4363-82bb-40316d3e0a26","Type":"ContainerStarted","Data":"45d3b2fe2f625381a2a942dae09aff957cf42cdead6169d29b84cbb202701aaa"} Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.462298 4730 generic.go:334] "Generic (PLEG): container finished" podID="9a38d833-db72-4566-b139-7788730a502a" containerID="2bd463f1b26cfe1b9822fd238c03679adad9d834d5b448d944528f6deceb749a" exitCode=0 Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.463946 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" event={"ID":"9a38d833-db72-4566-b139-7788730a502a","Type":"ContainerDied","Data":"2bd463f1b26cfe1b9822fd238c03679adad9d834d5b448d944528f6deceb749a"} Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.488048 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.490476 4730 scope.go:117] "RemoveContainer" 
containerID="c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.493854 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tgpgm"] Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.495649 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989\": container with ID starting with c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989 not found: ID does not exist" containerID="c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.495687 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989"} err="failed to get container status \"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989\": rpc error: code = NotFound desc = could not find container \"c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989\": container with ID starting with c9ee68792188768f2e70a5a006ca33e1d9f850ef2974b3faf0e1e2bdb1cd6989 not found: ID does not exist" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.511007 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.511239 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-03-20 15:42:47.0111898 +0000 UTC m=+226.224561169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.511309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.514454 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 15:42:47.01442296 +0000 UTC m=+226.227794329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.567563 4730 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.613031 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.633397 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 15:42:47.133359652 +0000 UTC m=+226.346731021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.700058 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:46 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:46 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:46 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.700120 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.735012 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.735579 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 15:42:47.235565769 +0000 UTC m=+226.448937138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bdpg6" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.754020 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.754790 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerName="route-controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.754817 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerName="route-controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.755120 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" containerName="route-controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.757964 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.758925 4730 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-20T15:42:46.567595777Z","Handler":null,"Name":""} Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.762017 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.769132 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.792682 4730 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.792745 4730 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.836185 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.836455 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " 
pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.836549 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.836596 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmhp\" (UniqueName: \"kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.843758 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.860695 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.910936 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:42:46 crc kubenswrapper[4730]: E0320 15:42:46.911189 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a38d833-db72-4566-b139-7788730a502a" containerName="controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.911207 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a38d833-db72-4566-b139-7788730a502a" containerName="controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.911356 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a38d833-db72-4566-b139-7788730a502a" containerName="controller-manager" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.911796 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.913645 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.914196 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.922775 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.923059 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.923201 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.923380 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.923502 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.925973 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.931377 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.935848 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937102 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq5zm\" (UniqueName: \"kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm\") pod \"9a38d833-db72-4566-b139-7788730a502a\" (UID: 
\"9a38d833-db72-4566-b139-7788730a502a\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937239 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert\") pod \"9a38d833-db72-4566-b139-7788730a502a\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937292 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config\") pod \"9a38d833-db72-4566-b139-7788730a502a\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937330 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca\") pod \"9a38d833-db72-4566-b139-7788730a502a\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937378 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles\") pod \"9a38d833-db72-4566-b139-7788730a502a\" (UID: \"9a38d833-db72-4566-b139-7788730a502a\") " Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937534 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmhp\" (UniqueName: \"kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937562 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937587 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.937684 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.938358 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.938733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.939802 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a38d833-db72-4566-b139-7788730a502a" (UID: "9a38d833-db72-4566-b139-7788730a502a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.940000 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config" (OuterVolumeSpecName: "config") pod "9a38d833-db72-4566-b139-7788730a502a" (UID: "9a38d833-db72-4566-b139-7788730a502a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.940380 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9a38d833-db72-4566-b139-7788730a502a" (UID: "9a38d833-db72-4566-b139-7788730a502a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.953239 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.953653 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.953688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a38d833-db72-4566-b139-7788730a502a" (UID: "9a38d833-db72-4566-b139-7788730a502a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.957569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm" (OuterVolumeSpecName: "kube-api-access-tq5zm") pod "9a38d833-db72-4566-b139-7788730a502a" (UID: "9a38d833-db72-4566-b139-7788730a502a"). InnerVolumeSpecName "kube-api-access-tq5zm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.960230 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmhp\" (UniqueName: \"kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp\") pod \"certified-operators-rlnqc\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.968602 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.969667 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:46 crc kubenswrapper[4730]: I0320 15:42:46.987043 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.014289 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.038894 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bdpg6\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.040616 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkk5\" (UniqueName: \"kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: 
\"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041057 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041242 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2755g\" (UniqueName: \"kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041634 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6js\" (UniqueName: \"kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041693 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041809 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041834 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041884 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.041978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042023 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities\") pod \"community-operators-mbtfk\" (UID: 
\"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042039 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042122 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042184 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a38d833-db72-4566-b139-7788730a502a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042195 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042203 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042212 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a38d833-db72-4566-b139-7788730a502a-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.042221 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq5zm\" (UniqueName: \"kubernetes.io/projected/9a38d833-db72-4566-b139-7788730a502a-kube-api-access-tq5zm\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.052857 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.145953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgkk5\" (UniqueName: \"kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146007 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146052 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2755g\" (UniqueName: \"kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146077 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6js\" (UniqueName: 
\"kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146127 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146191 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146275 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146315 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.146361 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.148070 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.148146 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.148395 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.148444 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.148766 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.150284 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: 
I0320 15:42:47.150435 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.152564 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.166661 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.166661 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.169356 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.170367 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.171221 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgkk5\" (UniqueName: \"kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5\") pod \"route-controller-manager-58875cfd6f-xthh9\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.180265 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2755g\" (UniqueName: \"kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g\") pod \"controller-manager-59447fbd49-wdtl4\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.181114 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6js\" (UniqueName: \"kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js\") pod \"community-operators-mbtfk\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.186530 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.247631 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 
15:42:47.247685 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqvn\" (UniqueName: \"kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.247718 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.311986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.312006 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.313227 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.360865 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.360975 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqvn\" (UniqueName: \"kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.361028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.363960 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.364915 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.365151 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.375177 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.376206 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: W0320 15:42:47.384797 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54319e7c_f09f_4f3d_80cd_8d6dcd4ef88e.slice/crio-2585ec9b7501bfbd7ee8fe496b92574cacdd42248f1dc5fb5a3a27e92aa35714 WatchSource:0}: Error finding container 2585ec9b7501bfbd7ee8fe496b92574cacdd42248f1dc5fb5a3a27e92aa35714: Status 404 returned error can't find the container with id 2585ec9b7501bfbd7ee8fe496b92574cacdd42248f1dc5fb5a3a27e92aa35714 Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.400307 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.417445 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqvn\" (UniqueName: \"kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn\") pod \"certified-operators-6mppz\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.462243 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.462342 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5sxl\" (UniqueName: \"kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " 
pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.462623 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.470276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" event={"ID":"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e","Type":"ContainerStarted","Data":"2585ec9b7501bfbd7ee8fe496b92574cacdd42248f1dc5fb5a3a27e92aa35714"} Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.476424 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" event={"ID":"9a38d833-db72-4566-b139-7788730a502a","Type":"ContainerDied","Data":"064c4bf4d5c66296dd98b82862d30c6c5583edf1808fe56690235f116a074683"} Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.476465 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hrm7z" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.476488 4730 scope.go:117] "RemoveContainer" containerID="2bd463f1b26cfe1b9822fd238c03679adad9d834d5b448d944528f6deceb749a" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.481267 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.497302 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.497534 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" event={"ID":"428fa435-b92e-4363-82bb-40316d3e0a26","Type":"ContainerStarted","Data":"67cb28a2ee07231be998e2f13dedddf5bf377cdc97da819cac1747f44545d3be"} Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.497585 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" event={"ID":"428fa435-b92e-4363-82bb-40316d3e0a26","Type":"ContainerStarted","Data":"79a845f4d5200a2feec2b08c9e542e7fa3735695ac0ac59d83362c07fa8ae895"} Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.527215 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jgzlv" podStartSLOduration=11.527194332 podStartE2EDuration="11.527194332s" podCreationTimestamp="2026-03-20 15:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:47.515580784 +0000 UTC m=+226.728952153" watchObservedRunningTime="2026-03-20 15:42:47.527194332 +0000 UTC m=+226.740565711" Mar 20 15:42:47 crc kubenswrapper[4730]: W0320 15:42:47.539784 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e9bea0_2eab_4ac3_ae73_6ad7bf4d7a98.slice/crio-3040ee2962d5bbb55e0748bc4e87b0b5953d222ba1e8e8b464bf1b4d1408cbfd WatchSource:0}: Error finding container 3040ee2962d5bbb55e0748bc4e87b0b5953d222ba1e8e8b464bf1b4d1408cbfd: Status 404 returned error can't find the container with id 3040ee2962d5bbb55e0748bc4e87b0b5953d222ba1e8e8b464bf1b4d1408cbfd Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.561743 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf" path="/var/lib/kubelet/pods/2b0a6f23-3cb9-4313-b5c0-78f7c0ffafcf/volumes" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.563051 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.563673 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.563704 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hrm7z"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.566995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.567051 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.567113 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5sxl\" (UniqueName: \"kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 
15:42:47.568870 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.570764 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.604789 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5sxl\" (UniqueName: \"kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl\") pod \"community-operators-cx74p\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.656470 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.716464 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:47 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:47 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:47 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.716516 4730 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.726519 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.835315 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:42:47 crc kubenswrapper[4730]: W0320 15:42:47.849327 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5addb8e_1dbc_41a2_8330_8a97251bd52f.slice/crio-542af9b157419c81e78bae4e6b0035cde152a688b5cf204d5b3fe35157527e94 WatchSource:0}: Error finding container 542af9b157419c81e78bae4e6b0035cde152a688b5cf204d5b3fe35157527e94: Status 404 returned error can't find the container with id 542af9b157419c81e78bae4e6b0035cde152a688b5cf204d5b3fe35157527e94 Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.991159 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.991934 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.996392 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 15:42:47 crc kubenswrapper[4730]: I0320 15:42:47.997156 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.038333 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.038810 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rbg9" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.043380 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.075476 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:42:48 crc kubenswrapper[4730]: W0320 15:42:48.076167 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7422509_bd52_437b_9459_9c715c66fc33.slice/crio-c34933180d93ae0c413fe4e3ba797162b90bde253f6503dcbf0ea4405517bfca WatchSource:0}: Error finding container c34933180d93ae0c413fe4e3ba797162b90bde253f6503dcbf0ea4405517bfca: Status 404 returned error can't find the container with id c34933180d93ae0c413fe4e3ba797162b90bde253f6503dcbf0ea4405517bfca Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.079677 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.079851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.163655 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.179564 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34080: no serving certificate available for the kubelet" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.181348 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.181448 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.182225 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.203658 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.294462 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.295224 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.297381 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.297512 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.307854 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.319569 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.384619 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.384799 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.490132 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.490858 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.497335 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.511651 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.526518 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerID="0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a" exitCode=0 Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.526605 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerDied","Data":"0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.526633 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerStarted","Data":"3040ee2962d5bbb55e0748bc4e87b0b5953d222ba1e8e8b464bf1b4d1408cbfd"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.532774 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" event={"ID":"d7422509-bd52-437b-9459-9c715c66fc33","Type":"ContainerStarted","Data":"97c7c0bbae68297bff265d2fc691b5c0a2a1e2f15cfbe251cb407d9a746b7e48"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.532841 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" 
event={"ID":"d7422509-bd52-437b-9459-9c715c66fc33","Type":"ContainerStarted","Data":"c34933180d93ae0c413fe4e3ba797162b90bde253f6503dcbf0ea4405517bfca"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.533965 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.554575 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" event={"ID":"be49b904-0667-4d74-ac81-e84600f0835e","Type":"ContainerStarted","Data":"067f0501bb28ee46c4dab9d0b265af305bd5580b555aabc3f22ccc19201445e0"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.554621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" event={"ID":"be49b904-0667-4d74-ac81-e84600f0835e","Type":"ContainerStarted","Data":"e460ec37f646594e2bfbe40ea47d0b74f2dd0eede1593eeb04f2a323e4f35cf9"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.554636 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.571104 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerID="18e38e92f40a21b89290233a7ffe301a018e759b353ad6883c83ed52a1e47762" exitCode=0 Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.571225 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx74p" event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerDied","Data":"18e38e92f40a21b89290233a7ffe301a018e759b353ad6883c83ed52a1e47762"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.571277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cx74p" event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerStarted","Data":"189336796f3983b3d071ba1d85e4dc4b864a1692cfe884a4465fbc035616d986"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.597521 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.597745 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" podStartSLOduration=3.597721972 podStartE2EDuration="3.597721972s" podCreationTimestamp="2026-03-20 15:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:48.596660949 +0000 UTC m=+227.810032318" watchObservedRunningTime="2026-03-20 15:42:48.597721972 +0000 UTC m=+227.811093341" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.598687 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.606850 4730 generic.go:334] "Generic (PLEG): container finished" podID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerID="1ce763ed176ec4f4dede58163ade6fda497d7444c6c6f195c24a524a711de167" exitCode=0 Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.606969 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbtfk" event={"ID":"d5addb8e-1dbc-41a2-8330-8a97251bd52f","Type":"ContainerDied","Data":"1ce763ed176ec4f4dede58163ade6fda497d7444c6c6f195c24a524a711de167"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.607021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbtfk" 
event={"ID":"d5addb8e-1dbc-41a2-8330-8a97251bd52f","Type":"ContainerStarted","Data":"542af9b157419c81e78bae4e6b0035cde152a688b5cf204d5b3fe35157527e94"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.609865 4730 generic.go:334] "Generic (PLEG): container finished" podID="be19fb65-a04f-42df-9b96-e620b58754bb" containerID="647092b460bb07570b06908ca4f98239d0470ba3df7bb23adf207cb830d51de7" exitCode=0 Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.609962 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" event={"ID":"be19fb65-a04f-42df-9b96-e620b58754bb","Type":"ContainerDied","Data":"647092b460bb07570b06908ca4f98239d0470ba3df7bb23adf207cb830d51de7"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.628669 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" event={"ID":"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e","Type":"ContainerStarted","Data":"deb22e7f4b1084e112c47fe5c83d1b16157bc080fc85d2ff74bb28b439a9502d"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.628888 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.631722 4730 generic.go:334] "Generic (PLEG): container finished" podID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerID="f14d75be5024f0d9fd4c3cf59a10c4fbb452ecc1d6a3188f6fd40ab5bbb8ffe9" exitCode=0 Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.632660 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.632999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerDied","Data":"f14d75be5024f0d9fd4c3cf59a10c4fbb452ecc1d6a3188f6fd40ab5bbb8ffe9"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.633042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerStarted","Data":"b24679ab1f6c7ce28e2ed00c17a4988d013e4500b53404671b46ef5509b85dc8"} Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.634658 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" podStartSLOduration=3.634623648 podStartE2EDuration="3.634623648s" podCreationTimestamp="2026-03-20 15:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:48.628166489 +0000 UTC m=+227.841537858" watchObservedRunningTime="2026-03-20 15:42:48.634623648 +0000 UTC m=+227.847995017" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.659834 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.697450 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.709113 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 20 15:42:48 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:48 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:48 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.710889 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.753027 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.756962 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.762437 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.779636 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.813273 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.822959 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nsdw7" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.840823 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.840942 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.844473 4730 patch_prober.go:28] interesting pod/console-f9d7485db-9kgl8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.844549 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9kgl8" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.856405 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" podStartSLOduration=175.856385825 podStartE2EDuration="2m55.856385825s" podCreationTimestamp="2026-03-20 15:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:48.815441205 +0000 UTC m=+228.028812584" watchObservedRunningTime="2026-03-20 15:42:48.856385825 +0000 UTC m=+228.069757194" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.897468 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.897629 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.897691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs46z\" (UniqueName: \"kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.943742 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.944200 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.971440 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.999134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.999267 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs46z\" (UniqueName: \"kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 
20 15:42:48 crc kubenswrapper[4730]: I0320 15:42:48.999390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.001137 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.002576 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.060338 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs46z\" (UniqueName: \"kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z\") pod \"redhat-marketplace-flpw2\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.105402 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.159427 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.159494 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.159427 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.159758 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.178369 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.188069 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2hv"] Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.189139 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.201628 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2hv"] Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.306049 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.306521 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.306585 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf5w\" (UniqueName: \"kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.408044 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.408137 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.408198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmf5w\" (UniqueName: \"kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.409110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.409133 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.463110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmf5w\" (UniqueName: \"kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w\") pod \"redhat-marketplace-2z2hv\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.564233 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9a38d833-db72-4566-b139-7788730a502a" path="/var/lib/kubelet/pods/9a38d833-db72-4566-b139-7788730a502a/volumes" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.601377 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.698856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23069a50-0f37-4d67-8cfd-e7a569cc6c92","Type":"ContainerStarted","Data":"ada4a56e9a542a91af4dbeb6d4cfcf6cb1bc38e39d46c8e511a8939bc38e8ff7"} Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.699203 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:49 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:49 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:49 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.699236 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.734763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed812b57-bdba-4cd0-be71-859fe5d52eba","Type":"ContainerStarted","Data":"519bceba9c186c0dc594c407511694240cba2dedd1d4d79a064d1216c34bbca8"} Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.734802 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"ed812b57-bdba-4cd0-be71-859fe5d52eba","Type":"ContainerStarted","Data":"6a0cd6655c2e9c825b00626c88656cb2f35f4bd32136c526b0ab430d8a3a7e0e"} Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.751509 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jzx77" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.761559 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.769699 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.769680493 podStartE2EDuration="2.769680493s" podCreationTimestamp="2026-03-20 15:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:42:49.76794308 +0000 UTC m=+228.981314449" watchObservedRunningTime="2026-03-20 15:42:49.769680493 +0000 UTC m=+228.983051862" Mar 20 15:42:49 crc kubenswrapper[4730]: I0320 15:42:49.992862 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.090888 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2hv"] Mar 20 15:42:50 crc kubenswrapper[4730]: W0320 15:42:50.124394 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715cbff8_9674_4896_8deb_54a6e9a8899e.slice/crio-8cd9a9489d9a0dbe8438d77747b3f3bcf8afc79d1dcd6dcfc2035db6222041b2 WatchSource:0}: Error finding container 8cd9a9489d9a0dbe8438d77747b3f3bcf8afc79d1dcd6dcfc2035db6222041b2: Status 404 returned error can't find the container with id 
8cd9a9489d9a0dbe8438d77747b3f3bcf8afc79d1dcd6dcfc2035db6222041b2 Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.155623 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.156648 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.159602 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.176276 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.227372 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgx8f\" (UniqueName: \"kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.227456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.227493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " 
pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.329380 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgx8f\" (UniqueName: \"kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.329490 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.329546 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.330223 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.330296 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc 
kubenswrapper[4730]: I0320 15:42:50.377490 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgx8f\" (UniqueName: \"kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f\") pod \"redhat-operators-8rptq\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.494196 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.546160 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.549216 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.559959 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.639756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86g2\" (UniqueName: \"kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.639839 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.639879 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.660267 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.710520 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:50 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:50 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:50 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.710594 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.744078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnh9\" (UniqueName: \"kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9\") pod \"be19fb65-a04f-42df-9b96-e620b58754bb\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.744306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume\") pod \"be19fb65-a04f-42df-9b96-e620b58754bb\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.744349 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume\") pod \"be19fb65-a04f-42df-9b96-e620b58754bb\" (UID: \"be19fb65-a04f-42df-9b96-e620b58754bb\") " Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.745020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c86g2\" (UniqueName: \"kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.745126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.745165 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.745810 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities\") pod \"redhat-operators-qmxvf\" (UID: 
\"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.746108 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.746117 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "be19fb65-a04f-42df-9b96-e620b58754bb" (UID: "be19fb65-a04f-42df-9b96-e620b58754bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.757806 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9" (OuterVolumeSpecName: "kube-api-access-wbnh9") pod "be19fb65-a04f-42df-9b96-e620b58754bb" (UID: "be19fb65-a04f-42df-9b96-e620b58754bb"). InnerVolumeSpecName "kube-api-access-wbnh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.759789 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be19fb65-a04f-42df-9b96-e620b58754bb" (UID: "be19fb65-a04f-42df-9b96-e620b58754bb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.796242 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c86g2\" (UniqueName: \"kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2\") pod \"redhat-operators-qmxvf\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.811142 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a347883-e4f7-4fcd-8920-59519533cf43" containerID="1e2d0f7b622d4a27e1b76b3b32f61e354d6ed5f7ddeb8e6368356819c35fc74f" exitCode=0 Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.811286 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerDied","Data":"1e2d0f7b622d4a27e1b76b3b32f61e354d6ed5f7ddeb8e6368356819c35fc74f"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.811333 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerStarted","Data":"9a27ed5cf68d2bc6928d37904a276f94c614d0e58deaf1534a9994ffbccaa224"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.823667 4730 generic.go:334] "Generic (PLEG): container finished" podID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerID="3d0d0e86eafcfd3f1d67f4c8dfdc39b982cb1033b59bc5dcda037270619199e3" exitCode=0 Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.823939 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerDied","Data":"3d0d0e86eafcfd3f1d67f4c8dfdc39b982cb1033b59bc5dcda037270619199e3"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.824003 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerStarted","Data":"8cd9a9489d9a0dbe8438d77747b3f3bcf8afc79d1dcd6dcfc2035db6222041b2"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.832106 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed812b57-bdba-4cd0-be71-859fe5d52eba" containerID="519bceba9c186c0dc594c407511694240cba2dedd1d4d79a064d1216c34bbca8" exitCode=0 Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.832701 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed812b57-bdba-4cd0-be71-859fe5d52eba","Type":"ContainerDied","Data":"519bceba9c186c0dc594c407511694240cba2dedd1d4d79a064d1216c34bbca8"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.846556 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be19fb65-a04f-42df-9b96-e620b58754bb-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.847469 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be19fb65-a04f-42df-9b96-e620b58754bb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.847493 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbnh9\" (UniqueName: \"kubernetes.io/projected/be19fb65-a04f-42df-9b96-e620b58754bb-kube-api-access-wbnh9\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.875261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" event={"ID":"be19fb65-a04f-42df-9b96-e620b58754bb","Type":"ContainerDied","Data":"c95af936fdf0b6d7d2255a8837cf5081d1d6b867d8c027bb70bec58e5bed039e"} Mar 20 15:42:50 
crc kubenswrapper[4730]: I0320 15:42:50.875298 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c95af936fdf0b6d7d2255a8837cf5081d1d6b867d8c027bb70bec58e5bed039e" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.875385 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc" Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.903782 4730 generic.go:334] "Generic (PLEG): container finished" podID="23069a50-0f37-4d67-8cfd-e7a569cc6c92" containerID="1795736ee21f4075505b0da99054eb59518951bfa57d0bc7b621b63bd06d7d99" exitCode=0 Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.904310 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23069a50-0f37-4d67-8cfd-e7a569cc6c92","Type":"ContainerDied","Data":"1795736ee21f4075505b0da99054eb59518951bfa57d0bc7b621b63bd06d7d99"} Mar 20 15:42:50 crc kubenswrapper[4730]: I0320 15:42:50.972177 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.036372 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.279844 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:42:51 crc kubenswrapper[4730]: W0320 15:42:51.294364 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab6c90a0_1bc1_476d_8526_d1fe438163e3.slice/crio-3663f63ee1b2ff284c18b50ae774d9b77f54d7f929ad2803b36f2f39057f8d54 WatchSource:0}: Error finding container 3663f63ee1b2ff284c18b50ae774d9b77f54d7f929ad2803b36f2f39057f8d54: Status 404 returned error can't find the container with id 3663f63ee1b2ff284c18b50ae774d9b77f54d7f929ad2803b36f2f39057f8d54 Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.698363 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:51 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:51 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:51 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.698627 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.914630 4730 generic.go:334] "Generic (PLEG): container finished" podID="558b00fd-2589-4842-8cba-db0cffe8c826" 
containerID="c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73" exitCode=0 Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.914712 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerDied","Data":"c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73"} Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.914741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerStarted","Data":"8b528dc3e6323a70e8b05e1cb0a0d95967e9a6d57d83e5d00d37458aa2621e38"} Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.923750 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerDied","Data":"6a84b3881231514a341a3dd596d04f1b46d9c8c10a2296046ee0ddb6c55675a3"} Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.923663 4730 generic.go:334] "Generic (PLEG): container finished" podID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerID="6a84b3881231514a341a3dd596d04f1b46d9c8c10a2296046ee0ddb6c55675a3" exitCode=0 Mar 20 15:42:51 crc kubenswrapper[4730]: I0320 15:42:51.924039 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerStarted","Data":"3663f63ee1b2ff284c18b50ae774d9b77f54d7f929ad2803b36f2f39057f8d54"} Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.325953 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.361448 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368008 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access\") pod \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368101 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir\") pod \"ed812b57-bdba-4cd0-be71-859fe5d52eba\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access\") pod \"ed812b57-bdba-4cd0-be71-859fe5d52eba\" (UID: \"ed812b57-bdba-4cd0-be71-859fe5d52eba\") " Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368204 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed812b57-bdba-4cd0-be71-859fe5d52eba" (UID: "ed812b57-bdba-4cd0-be71-859fe5d52eba"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368216 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir\") pod \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\" (UID: \"23069a50-0f37-4d67-8cfd-e7a569cc6c92\") " Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368264 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23069a50-0f37-4d67-8cfd-e7a569cc6c92" (UID: "23069a50-0f37-4d67-8cfd-e7a569cc6c92"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368730 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed812b57-bdba-4cd0-be71-859fe5d52eba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.368748 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.379937 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23069a50-0f37-4d67-8cfd-e7a569cc6c92" (UID: "23069a50-0f37-4d67-8cfd-e7a569cc6c92"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.380327 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed812b57-bdba-4cd0-be71-859fe5d52eba" (UID: "ed812b57-bdba-4cd0-be71-859fe5d52eba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.470116 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23069a50-0f37-4d67-8cfd-e7a569cc6c92-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.470158 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed812b57-bdba-4cd0-be71-859fe5d52eba-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.698310 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:52 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:52 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:52 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.698377 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.934806 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed812b57-bdba-4cd0-be71-859fe5d52eba","Type":"ContainerDied","Data":"6a0cd6655c2e9c825b00626c88656cb2f35f4bd32136c526b0ab430d8a3a7e0e"} Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.934840 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.934857 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a0cd6655c2e9c825b00626c88656cb2f35f4bd32136c526b0ab430d8a3a7e0e" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.949429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"23069a50-0f37-4d67-8cfd-e7a569cc6c92","Type":"ContainerDied","Data":"ada4a56e9a542a91af4dbeb6d4cfcf6cb1bc38e39d46c8e511a8939bc38e8ff7"} Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.949483 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada4a56e9a542a91af4dbeb6d4cfcf6cb1bc38e39d46c8e511a8939bc38e8ff7" Mar 20 15:42:52 crc kubenswrapper[4730]: I0320 15:42:52.949573 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 15:42:53 crc kubenswrapper[4730]: I0320 15:42:53.333280 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34094: no serving certificate available for the kubelet" Mar 20 15:42:53 crc kubenswrapper[4730]: I0320 15:42:53.699028 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:53 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:53 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:53 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:53 crc kubenswrapper[4730]: I0320 15:42:53.699323 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:54 crc kubenswrapper[4730]: I0320 15:42:54.372163 4730 ???:1] "http: TLS handshake error from 192.168.126.11:34888: no serving certificate available for the kubelet" Mar 20 15:42:54 crc kubenswrapper[4730]: I0320 15:42:54.699352 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:54 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:54 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:54 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:54 crc kubenswrapper[4730]: I0320 15:42:54.699405 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" 
podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:54 crc kubenswrapper[4730]: I0320 15:42:54.795319 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7ckfm" Mar 20 15:42:55 crc kubenswrapper[4730]: I0320 15:42:55.698364 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:55 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:55 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:55 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:55 crc kubenswrapper[4730]: I0320 15:42:55.698439 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:56 crc kubenswrapper[4730]: I0320 15:42:56.700059 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:56 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:56 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:56 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:56 crc kubenswrapper[4730]: I0320 15:42:56.700121 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:57 crc 
kubenswrapper[4730]: I0320 15:42:57.698230 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:57 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:57 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:57 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:57 crc kubenswrapper[4730]: I0320 15:42:57.698621 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:58 crc kubenswrapper[4730]: I0320 15:42:58.747772 4730 patch_prober.go:28] interesting pod/router-default-5444994796-92dt7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 15:42:58 crc kubenswrapper[4730]: [-]has-synced failed: reason withheld Mar 20 15:42:58 crc kubenswrapper[4730]: [+]process-running ok Mar 20 15:42:58 crc kubenswrapper[4730]: healthz check failed Mar 20 15:42:58 crc kubenswrapper[4730]: I0320 15:42:58.747837 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-92dt7" podUID="18214bd2-9c3a-4737-885b-2b5c905311d8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 15:42:58 crc kubenswrapper[4730]: I0320 15:42:58.845742 4730 patch_prober.go:28] interesting pod/console-f9d7485db-9kgl8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 20 15:42:58 crc 
kubenswrapper[4730]: I0320 15:42:58.845799 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9kgl8" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.159722 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.159782 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.160117 4730 patch_prober.go:28] interesting pod/downloads-7954f5f757-g7hdt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.160199 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-g7hdt" podUID="d32c9cec-9f6c-4304-8bc9-d2e52128470a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.698527 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:42:59 crc kubenswrapper[4730]: I0320 15:42:59.708203 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-92dt7" Mar 20 15:43:03 crc kubenswrapper[4730]: I0320 15:43:03.601013 4730 ???:1] "http: TLS handshake error from 192.168.126.11:48288: no serving certificate available for the kubelet" Mar 20 15:43:04 crc kubenswrapper[4730]: I0320 15:43:04.143321 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:43:04 crc kubenswrapper[4730]: I0320 15:43:04.151066 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:43:04 crc kubenswrapper[4730]: I0320 15:43:04.151600 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" containerID="cri-o://067f0501bb28ee46c4dab9d0b265af305bd5580b555aabc3f22ccc19201445e0" gracePeriod=30 Mar 20 15:43:04 crc kubenswrapper[4730]: I0320 15:43:04.151807 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" containerID="cri-o://97c7c0bbae68297bff265d2fc691b5c0a2a1e2f15cfbe251cb407d9a746b7e48" gracePeriod=30 Mar 20 15:43:05 crc kubenswrapper[4730]: I0320 15:43:05.017242 4730 generic.go:334] "Generic (PLEG): container finished" podID="d7422509-bd52-437b-9459-9c715c66fc33" containerID="97c7c0bbae68297bff265d2fc691b5c0a2a1e2f15cfbe251cb407d9a746b7e48" exitCode=0 Mar 20 15:43:05 crc kubenswrapper[4730]: I0320 15:43:05.017293 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" 
event={"ID":"d7422509-bd52-437b-9459-9c715c66fc33","Type":"ContainerDied","Data":"97c7c0bbae68297bff265d2fc691b5c0a2a1e2f15cfbe251cb407d9a746b7e48"} Mar 20 15:43:05 crc kubenswrapper[4730]: I0320 15:43:05.019790 4730 generic.go:334] "Generic (PLEG): container finished" podID="be49b904-0667-4d74-ac81-e84600f0835e" containerID="067f0501bb28ee46c4dab9d0b265af305bd5580b555aabc3f22ccc19201445e0" exitCode=0 Mar 20 15:43:05 crc kubenswrapper[4730]: I0320 15:43:05.019896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" event={"ID":"be49b904-0667-4d74-ac81-e84600f0835e","Type":"ContainerDied","Data":"067f0501bb28ee46c4dab9d0b265af305bd5580b555aabc3f22ccc19201445e0"} Mar 20 15:43:07 crc kubenswrapper[4730]: I0320 15:43:07.059139 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:43:07 crc kubenswrapper[4730]: I0320 15:43:07.313884 4730 patch_prober.go:28] interesting pod/route-controller-manager-58875cfd6f-xthh9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Mar 20 15:43:07 crc kubenswrapper[4730]: I0320 15:43:07.313937 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Mar 20 15:43:07 crc kubenswrapper[4730]: I0320 15:43:07.314419 4730 patch_prober.go:28] interesting pod/controller-manager-59447fbd49-wdtl4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 20 15:43:07 crc kubenswrapper[4730]: I0320 15:43:07.314471 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 20 15:43:08 crc kubenswrapper[4730]: I0320 15:43:08.845312 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:43:08 crc kubenswrapper[4730]: I0320 15:43:08.849098 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:43:09 crc kubenswrapper[4730]: I0320 15:43:09.171131 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-g7hdt" Mar 20 15:43:12 crc kubenswrapper[4730]: I0320 15:43:12.879937 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:43:12 crc kubenswrapper[4730]: I0320 15:43:12.880464 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:43:16 crc kubenswrapper[4730]: E0320 15:43:16.689072 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 20 15:43:16 crc kubenswrapper[4730]: E0320 15:43:16.689224 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 15:43:16 crc kubenswrapper[4730]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 20 15:43:16 crc kubenswrapper[4730]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46gxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29567022-wf5nv_openshift-infra(7d87adfe-3206-4175-8d8f-5a00015cc61e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 20 15:43:16 crc kubenswrapper[4730]: > logger="UnhandledError" Mar 20 15:43:16 crc kubenswrapper[4730]: E0320 15:43:16.690461 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" Mar 20 15:43:17 crc 
kubenswrapper[4730]: E0320 15:43:17.091155 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" Mar 20 15:43:18 crc kubenswrapper[4730]: I0320 15:43:18.313361 4730 patch_prober.go:28] interesting pod/controller-manager-59447fbd49-wdtl4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:43:18 crc kubenswrapper[4730]: I0320 15:43:18.313733 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 15:43:18 crc kubenswrapper[4730]: I0320 15:43:18.313388 4730 patch_prober.go:28] interesting pod/route-controller-manager-58875cfd6f-xthh9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 15:43:18 crc kubenswrapper[4730]: I0320 15:43:18.313827 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 15:43:19 crc kubenswrapper[4730]: I0320 15:43:19.690098 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4xlq" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.387573 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3399164236/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.387837 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8rptq_openshift-marketplace(558b00fd-2589-4842-8cba-db0cffe8c826): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3399164236/2\": happened during read: context canceled" logger="UnhandledError" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.389136 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage3399164236/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-8rptq" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.399331 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage824141395/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.399525 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c86g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qmxvf_openshift-marketplace(ab6c90a0-1bc1-476d-8526-d1fe438163e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage824141395/2\": happened during read: context canceled" logger="UnhandledError" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.400759 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage824141395/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-qmxvf" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.426301 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.431432 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456365 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.456571 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23069a50-0f37-4d67-8cfd-e7a569cc6c92" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456582 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="23069a50-0f37-4d67-8cfd-e7a569cc6c92" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.456660 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456668 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.456699 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed812b57-bdba-4cd0-be71-859fe5d52eba" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456705 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed812b57-bdba-4cd0-be71-859fe5d52eba" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.456714 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456719 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: E0320 15:43:21.456726 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be19fb65-a04f-42df-9b96-e620b58754bb" containerName="collect-profiles" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456731 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="be19fb65-a04f-42df-9b96-e620b58754bb" containerName="collect-profiles" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456868 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="23069a50-0f37-4d67-8cfd-e7a569cc6c92" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456881 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="be49b904-0667-4d74-ac81-e84600f0835e" containerName="route-controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456890 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed812b57-bdba-4cd0-be71-859fe5d52eba" containerName="pruner" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456896 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="be19fb65-a04f-42df-9b96-e620b58754bb" containerName="collect-profiles" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.456903 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7422509-bd52-437b-9459-9c715c66fc33" containerName="controller-manager" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.457418 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.467833 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517035 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config\") pod \"d7422509-bd52-437b-9459-9c715c66fc33\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config\") pod \"be49b904-0667-4d74-ac81-e84600f0835e\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517207 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles\") pod \"d7422509-bd52-437b-9459-9c715c66fc33\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517233 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca\") pod \"be49b904-0667-4d74-ac81-e84600f0835e\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517308 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert\") pod \"d7422509-bd52-437b-9459-9c715c66fc33\" (UID: 
\"d7422509-bd52-437b-9459-9c715c66fc33\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517337 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert\") pod \"be49b904-0667-4d74-ac81-e84600f0835e\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517388 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgkk5\" (UniqueName: \"kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5\") pod \"be49b904-0667-4d74-ac81-e84600f0835e\" (UID: \"be49b904-0667-4d74-ac81-e84600f0835e\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.517450 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2755g\" (UniqueName: \"kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g\") pod \"d7422509-bd52-437b-9459-9c715c66fc33\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.518151 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca" (OuterVolumeSpecName: "client-ca") pod "be49b904-0667-4d74-ac81-e84600f0835e" (UID: "be49b904-0667-4d74-ac81-e84600f0835e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.518193 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config" (OuterVolumeSpecName: "config") pod "be49b904-0667-4d74-ac81-e84600f0835e" (UID: "be49b904-0667-4d74-ac81-e84600f0835e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.518612 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca\") pod \"d7422509-bd52-437b-9459-9c715c66fc33\" (UID: \"d7422509-bd52-437b-9459-9c715c66fc33\") " Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519127 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519315 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7422509-bd52-437b-9459-9c715c66fc33" (UID: "d7422509-bd52-437b-9459-9c715c66fc33"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519358 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d7422509-bd52-437b-9459-9c715c66fc33" (UID: "d7422509-bd52-437b-9459-9c715c66fc33"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519506 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config" (OuterVolumeSpecName: "config") pod "d7422509-bd52-437b-9459-9c715c66fc33" (UID: "d7422509-bd52-437b-9459-9c715c66fc33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519562 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519677 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519718 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj9c\" (UniqueName: \"kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519859 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.519876 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.520177 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be49b904-0667-4d74-ac81-e84600f0835e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.520188 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.520199 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7422509-bd52-437b-9459-9c715c66fc33-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.523665 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5" (OuterVolumeSpecName: "kube-api-access-dgkk5") pod "be49b904-0667-4d74-ac81-e84600f0835e" (UID: "be49b904-0667-4d74-ac81-e84600f0835e"). InnerVolumeSpecName "kube-api-access-dgkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.523705 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g" (OuterVolumeSpecName: "kube-api-access-2755g") pod "d7422509-bd52-437b-9459-9c715c66fc33" (UID: "d7422509-bd52-437b-9459-9c715c66fc33"). 
InnerVolumeSpecName "kube-api-access-2755g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.525657 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7422509-bd52-437b-9459-9c715c66fc33" (UID: "d7422509-bd52-437b-9459-9c715c66fc33"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.526390 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be49b904-0667-4d74-ac81-e84600f0835e" (UID: "be49b904-0667-4d74-ac81-e84600f0835e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.621815 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.621877 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.621897 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj9c\" (UniqueName: 
\"kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.621951 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.622023 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7422509-bd52-437b-9459-9c715c66fc33-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.622033 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be49b904-0667-4d74-ac81-e84600f0835e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.622043 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgkk5\" (UniqueName: \"kubernetes.io/projected/be49b904-0667-4d74-ac81-e84600f0835e-kube-api-access-dgkk5\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.622055 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2755g\" (UniqueName: \"kubernetes.io/projected/d7422509-bd52-437b-9459-9c715c66fc33-kube-api-access-2755g\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.623088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.623090 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.627046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.637503 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj9c\" (UniqueName: \"kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c\") pod \"route-controller-manager-677b48c9fc-4n4h2\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:21 crc kubenswrapper[4730]: I0320 15:43:21.785660 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.114222 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.114211 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9" event={"ID":"be49b904-0667-4d74-ac81-e84600f0835e","Type":"ContainerDied","Data":"e460ec37f646594e2bfbe40ea47d0b74f2dd0eede1593eeb04f2a323e4f35cf9"} Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.114369 4730 scope.go:117] "RemoveContainer" containerID="067f0501bb28ee46c4dab9d0b265af305bd5580b555aabc3f22ccc19201445e0" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.117734 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" event={"ID":"d7422509-bd52-437b-9459-9c715c66fc33","Type":"ContainerDied","Data":"c34933180d93ae0c413fe4e3ba797162b90bde253f6503dcbf0ea4405517bfca"} Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.118137 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59447fbd49-wdtl4" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.151954 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.154969 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59447fbd49-wdtl4"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.182443 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.184911 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58875cfd6f-xthh9"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.386405 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.387204 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.390598 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.390979 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.393979 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.432744 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.432798 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.533888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.533934 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.534020 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.549603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:22 crc kubenswrapper[4730]: I0320 15:43:22.718223 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.542795 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be49b904-0667-4d74-ac81-e84600f0835e" path="/var/lib/kubelet/pods/be49b904-0667-4d74-ac81-e84600f0835e/volumes" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.544143 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7422509-bd52-437b-9459-9c715c66fc33" path="/var/lib/kubelet/pods/d7422509-bd52-437b-9459-9c715c66fc33/volumes" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.940887 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77874967bc-nkmqc"] Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.941999 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.944107 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.944146 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.946925 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.947457 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.947479 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.948530 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.958788 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77874967bc-nkmqc"] Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.965111 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.985236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " 
pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.985325 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.985371 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.985393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdv6d\" (UniqueName: \"kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:23 crc kubenswrapper[4730]: I0320 15:43:23.985417 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.087285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.087389 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.087477 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.087501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdv6d\" (UniqueName: \"kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.087535 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 
15:43:24.127923 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77874967bc-nkmqc"] Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.129020 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-jdv6d proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" podUID="e894fac3-fa5e-4281-9765-30dea46c6b32" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.167170 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.167233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.168290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.172909 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdv6d\" (UniqueName: 
\"kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.184075 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert\") pod \"controller-manager-77874967bc-nkmqc\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.221813 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qmxvf" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.222151 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8rptq" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" Mar 20 15:43:24 crc kubenswrapper[4730]: I0320 15:43:24.236574 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.312467 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.312700 4730 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5sxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cx74p_openshift-marketplace(7a118148-49cc-4b61-bb43-44e3ef2c3048): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:24 crc kubenswrapper[4730]: E0320 15:43:24.314342 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cx74p" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.139012 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.150858 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202040 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles\") pod \"e894fac3-fa5e-4281-9765-30dea46c6b32\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202100 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdv6d\" (UniqueName: \"kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d\") pod \"e894fac3-fa5e-4281-9765-30dea46c6b32\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202167 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca\") pod \"e894fac3-fa5e-4281-9765-30dea46c6b32\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202213 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config\") pod \"e894fac3-fa5e-4281-9765-30dea46c6b32\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202264 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert\") pod \"e894fac3-fa5e-4281-9765-30dea46c6b32\" (UID: \"e894fac3-fa5e-4281-9765-30dea46c6b32\") " Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202696 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e894fac3-fa5e-4281-9765-30dea46c6b32" (UID: "e894fac3-fa5e-4281-9765-30dea46c6b32"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.202885 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config" (OuterVolumeSpecName: "config") pod "e894fac3-fa5e-4281-9765-30dea46c6b32" (UID: "e894fac3-fa5e-4281-9765-30dea46c6b32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.203322 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca" (OuterVolumeSpecName: "client-ca") pod "e894fac3-fa5e-4281-9765-30dea46c6b32" (UID: "e894fac3-fa5e-4281-9765-30dea46c6b32"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.214939 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e894fac3-fa5e-4281-9765-30dea46c6b32" (UID: "e894fac3-fa5e-4281-9765-30dea46c6b32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.230271 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d" (OuterVolumeSpecName: "kube-api-access-jdv6d") pod "e894fac3-fa5e-4281-9765-30dea46c6b32" (UID: "e894fac3-fa5e-4281-9765-30dea46c6b32"). InnerVolumeSpecName "kube-api-access-jdv6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.303705 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.303738 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.303746 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e894fac3-fa5e-4281-9765-30dea46c6b32-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.303754 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e894fac3-fa5e-4281-9765-30dea46c6b32-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 
15:43:25 crc kubenswrapper[4730]: I0320 15:43:25.303764 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdv6d\" (UniqueName: \"kubernetes.io/projected/e894fac3-fa5e-4281-9765-30dea46c6b32-kube-api-access-jdv6d\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.145293 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77874967bc-nkmqc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.206936 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f56868448-2fbxh"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.208300 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.210582 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.210723 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.214135 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.215088 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77874967bc-nkmqc"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.215636 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.215722 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" 
Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.217695 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.218994 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77874967bc-nkmqc"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.222293 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.225925 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f56868448-2fbxh"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.318824 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.318883 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xrx\" (UniqueName: \"kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.318934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles\") pod \"controller-manager-6f56868448-2fbxh\" (UID: 
\"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.318952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.318976 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.378497 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cx74p" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.421394 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xrx\" (UniqueName: \"kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.421508 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.421539 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.421575 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.421673 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.423383 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.425007 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.426434 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.439049 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.443987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xrx\" (UniqueName: \"kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx\") pod \"controller-manager-6f56868448-2fbxh\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.469262 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.469527 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwqvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6mppz_openshift-marketplace(168c4cbd-3a44-48a5-be95-0eb4ea01d6c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.470757 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6mppz" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.474488 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.474696 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rd6js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mbtfk_openshift-marketplace(d5addb8e-1dbc-41a2-8330-8a97251bd52f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:26 crc kubenswrapper[4730]: E0320 15:43:26.475918 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mbtfk" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.542183 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.578956 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.579643 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.591670 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.624674 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.624768 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.624944 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.726309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.726397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.726424 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.726889 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.726890 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.743871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:26 crc kubenswrapper[4730]: I0320 15:43:26.908625 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:43:27 crc kubenswrapper[4730]: I0320 15:43:27.540729 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e894fac3-fa5e-4281-9765-30dea46c6b32" path="/var/lib/kubelet/pods/e894fac3-fa5e-4281-9765-30dea46c6b32/volumes" Mar 20 15:43:27 crc kubenswrapper[4730]: E0320 15:43:27.924625 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6mppz" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" Mar 20 15:43:27 crc kubenswrapper[4730]: E0320 15:43:27.924803 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mbtfk" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" Mar 20 15:43:27 crc kubenswrapper[4730]: I0320 15:43:27.957736 4730 scope.go:117] "RemoveContainer" containerID="97c7c0bbae68297bff265d2fc691b5c0a2a1e2f15cfbe251cb407d9a746b7e48" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.006992 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.007545 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hs46z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-flpw2_openshift-marketplace(5a347883-e4f7-4fcd-8920-59519533cf43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.008914 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-flpw2" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" Mar 20 15:43:28 crc 
kubenswrapper[4730]: E0320 15:43:28.050228 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.050528 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmf5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-2z2hv_openshift-marketplace(715cbff8-9674-4896-8deb-54a6e9a8899e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.051772 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2z2hv" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.072993 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.073267 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgmhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rlnqc_openshift-marketplace(e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.074685 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rlnqc" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" Mar 20 15:43:28 crc 
kubenswrapper[4730]: E0320 15:43:28.171990 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-flpw2" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.172593 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rlnqc" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" Mar 20 15:43:28 crc kubenswrapper[4730]: E0320 15:43:28.172640 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2z2hv" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" Mar 20 15:43:28 crc kubenswrapper[4730]: I0320 15:43:28.290609 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 15:43:28 crc kubenswrapper[4730]: I0320 15:43:28.390658 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 15:43:28 crc kubenswrapper[4730]: I0320 15:43:28.440913 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f56868448-2fbxh"] Mar 20 15:43:28 crc kubenswrapper[4730]: I0320 15:43:28.444117 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:28 crc kubenswrapper[4730]: W0320 15:43:28.456917 4730 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619056a7_dcfd_4038_a060_219937115302.slice/crio-5329a4a78efd925df61b8a787b8f9a45a1ac33c3586857f71eed6f7475be4589 WatchSource:0}: Error finding container 5329a4a78efd925df61b8a787b8f9a45a1ac33c3586857f71eed6f7475be4589: Status 404 returned error can't find the container with id 5329a4a78efd925df61b8a787b8f9a45a1ac33c3586857f71eed6f7475be4589 Mar 20 15:43:28 crc kubenswrapper[4730]: W0320 15:43:28.463758 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d747680_5dde_4793_863a_252a5f67233a.slice/crio-2d0d6a6d61d99c2e38791ba9cd9580b05d3d5ca5c004c4c372aff09d24128220 WatchSource:0}: Error finding container 2d0d6a6d61d99c2e38791ba9cd9580b05d3d5ca5c004c4c372aff09d24128220: Status 404 returned error can't find the container with id 2d0d6a6d61d99c2e38791ba9cd9580b05d3d5ca5c004c4c372aff09d24128220 Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.176003 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e48519c7-0cdc-419b-bd72-2bab0e911af8","Type":"ContainerStarted","Data":"d1983b19ac38ba32d4fa20a02bf50a7e57dd7a9e5c61bb3d5cfddbb58ce8788c"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.176482 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e48519c7-0cdc-419b-bd72-2bab0e911af8","Type":"ContainerStarted","Data":"5af4b462745de074ee5968bfbd84bbf7129ced0fcb060ef525aacd425c95e3c1"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.177705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" event={"ID":"9d747680-5dde-4793-863a-252a5f67233a","Type":"ContainerStarted","Data":"6a969344117b0e223a95c25576df0115301c9833e8c4f08723a3caa581f16b8b"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 
15:43:29.177739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" event={"ID":"9d747680-5dde-4793-863a-252a5f67233a","Type":"ContainerStarted","Data":"2d0d6a6d61d99c2e38791ba9cd9580b05d3d5ca5c004c4c372aff09d24128220"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.178011 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.179257 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"78bff99a-9296-41fe-ac5d-b41a183e2349","Type":"ContainerStarted","Data":"5d7f9de408c9d5d877667c8e376ecf1e8e460670d7d56cd105eae027fc2488bf"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.179307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"78bff99a-9296-41fe-ac5d-b41a183e2349","Type":"ContainerStarted","Data":"41294ad67a411ffc3356abec6adfd499182ee81671d1eae6d8ef8df30b5e38f9"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.180838 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" event={"ID":"619056a7-dcfd-4038-a060-219937115302","Type":"ContainerStarted","Data":"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.180868 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" event={"ID":"619056a7-dcfd-4038-a060-219937115302","Type":"ContainerStarted","Data":"5329a4a78efd925df61b8a787b8f9a45a1ac33c3586857f71eed6f7475be4589"} Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.180947 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" podUID="619056a7-dcfd-4038-a060-219937115302" containerName="route-controller-manager" containerID="cri-o://f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19" gracePeriod=30 Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.182184 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.183416 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.185835 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.190687 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.190672167 podStartE2EDuration="3.190672167s" podCreationTimestamp="2026-03-20 15:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:43:29.189042247 +0000 UTC m=+268.402413616" watchObservedRunningTime="2026-03-20 15:43:29.190672167 +0000 UTC m=+268.404043536" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.230083 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" podStartSLOduration=25.230060956 podStartE2EDuration="25.230060956s" podCreationTimestamp="2026-03-20 15:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:43:29.212523548 +0000 UTC 
m=+268.425894917" watchObservedRunningTime="2026-03-20 15:43:29.230060956 +0000 UTC m=+268.443432335" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.232517 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" podStartSLOduration=5.232505241 podStartE2EDuration="5.232505241s" podCreationTimestamp="2026-03-20 15:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:43:29.231551682 +0000 UTC m=+268.444923071" watchObservedRunningTime="2026-03-20 15:43:29.232505241 +0000 UTC m=+268.445876610" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.254059 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.254038542 podStartE2EDuration="7.254038542s" podCreationTimestamp="2026-03-20 15:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:43:29.250353849 +0000 UTC m=+268.463725228" watchObservedRunningTime="2026-03-20 15:43:29.254038542 +0000 UTC m=+268.467409911" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.512712 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.545764 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2"] Mar 20 15:43:29 crc kubenswrapper[4730]: E0320 15:43:29.546078 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619056a7-dcfd-4038-a060-219937115302" containerName="route-controller-manager" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.546092 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="619056a7-dcfd-4038-a060-219937115302" containerName="route-controller-manager" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.546349 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="619056a7-dcfd-4038-a060-219937115302" containerName="route-controller-manager" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.547181 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.552467 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2"] Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.569935 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca\") pod \"619056a7-dcfd-4038-a060-219937115302\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.570064 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckj9c\" (UniqueName: \"kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c\") pod \"619056a7-dcfd-4038-a060-219937115302\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.570159 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config\") pod \"619056a7-dcfd-4038-a060-219937115302\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.570194 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert\") pod \"619056a7-dcfd-4038-a060-219937115302\" (UID: \"619056a7-dcfd-4038-a060-219937115302\") " Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.573283 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca" (OuterVolumeSpecName: "client-ca") pod "619056a7-dcfd-4038-a060-219937115302" 
(UID: "619056a7-dcfd-4038-a060-219937115302"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.573294 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config" (OuterVolumeSpecName: "config") pod "619056a7-dcfd-4038-a060-219937115302" (UID: "619056a7-dcfd-4038-a060-219937115302"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.576576 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "619056a7-dcfd-4038-a060-219937115302" (UID: "619056a7-dcfd-4038-a060-219937115302"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.579557 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c" (OuterVolumeSpecName: "kube-api-access-ckj9c") pod "619056a7-dcfd-4038-a060-219937115302" (UID: "619056a7-dcfd-4038-a060-219937115302"). InnerVolumeSpecName "kube-api-access-ckj9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671522 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671609 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671649 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdnv\" (UniqueName: \"kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671680 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671739 4730 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671749 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619056a7-dcfd-4038-a060-219937115302-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671757 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/619056a7-dcfd-4038-a060-219937115302-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.671765 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckj9c\" (UniqueName: \"kubernetes.io/projected/619056a7-dcfd-4038-a060-219937115302-kube-api-access-ckj9c\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.772885 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.773468 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.773492 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdnv\" (UniqueName: 
\"kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.773525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.774681 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.775443 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.789891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 
15:43:29.796100 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdnv\" (UniqueName: \"kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv\") pod \"route-controller-manager-b75b5f765-8wjw2\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:29 crc kubenswrapper[4730]: I0320 15:43:29.866809 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.185601 4730 generic.go:334] "Generic (PLEG): container finished" podID="619056a7-dcfd-4038-a060-219937115302" containerID="f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19" exitCode=0 Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.185664 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" event={"ID":"619056a7-dcfd-4038-a060-219937115302","Type":"ContainerDied","Data":"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19"} Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.186012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" event={"ID":"619056a7-dcfd-4038-a060-219937115302","Type":"ContainerDied","Data":"5329a4a78efd925df61b8a787b8f9a45a1ac33c3586857f71eed6f7475be4589"} Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.186034 4730 scope.go:117] "RemoveContainer" containerID="f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19" Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.185725 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2" Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.188103 4730 generic.go:334] "Generic (PLEG): container finished" podID="78bff99a-9296-41fe-ac5d-b41a183e2349" containerID="5d7f9de408c9d5d877667c8e376ecf1e8e460670d7d56cd105eae027fc2488bf" exitCode=0 Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.188185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"78bff99a-9296-41fe-ac5d-b41a183e2349","Type":"ContainerDied","Data":"5d7f9de408c9d5d877667c8e376ecf1e8e460670d7d56cd105eae027fc2488bf"} Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.198956 4730 scope.go:117] "RemoveContainer" containerID="f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19" Mar 20 15:43:30 crc kubenswrapper[4730]: E0320 15:43:30.199280 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19\": container with ID starting with f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19 not found: ID does not exist" containerID="f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19" Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.199332 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19"} err="failed to get container status \"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19\": rpc error: code = NotFound desc = could not find container \"f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19\": container with ID starting with f597a41156cb37b583c222dbf5b174ede3554dd0f42eebcfaae39f06d1dc9a19 not found: ID does not exist" Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.381241 4730 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.384617 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677b48c9fc-4n4h2"] Mar 20 15:43:30 crc kubenswrapper[4730]: I0320 15:43:30.464874 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2"] Mar 20 15:43:30 crc kubenswrapper[4730]: W0320 15:43:30.472433 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d20fab_86cc_44d8_a8b9_c60f6835c5e0.slice/crio-4e99a4704ba10afb385846d96e0db2ec14bcce1391fc5cd9c5fede455f436bf0 WatchSource:0}: Error finding container 4e99a4704ba10afb385846d96e0db2ec14bcce1391fc5cd9c5fede455f436bf0: Status 404 returned error can't find the container with id 4e99a4704ba10afb385846d96e0db2ec14bcce1391fc5cd9c5fede455f436bf0 Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.196229 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" event={"ID":"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0","Type":"ContainerStarted","Data":"34175b3ad56804c8ddffbef2e3fafd18e67e87b94833ec1a23054a68de4fe0be"} Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.196769 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" event={"ID":"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0","Type":"ContainerStarted","Data":"4e99a4704ba10afb385846d96e0db2ec14bcce1391fc5cd9c5fede455f436bf0"} Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.196800 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 
20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.202189 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.218505 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" podStartSLOduration=7.218484886 podStartE2EDuration="7.218484886s" podCreationTimestamp="2026-03-20 15:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:43:31.214486753 +0000 UTC m=+270.427858142" watchObservedRunningTime="2026-03-20 15:43:31.218484886 +0000 UTC m=+270.431856265" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.447134 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.496344 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access\") pod \"78bff99a-9296-41fe-ac5d-b41a183e2349\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.496488 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir\") pod \"78bff99a-9296-41fe-ac5d-b41a183e2349\" (UID: \"78bff99a-9296-41fe-ac5d-b41a183e2349\") " Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.496576 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"78bff99a-9296-41fe-ac5d-b41a183e2349" (UID: "78bff99a-9296-41fe-ac5d-b41a183e2349"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.496849 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78bff99a-9296-41fe-ac5d-b41a183e2349-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.505928 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78bff99a-9296-41fe-ac5d-b41a183e2349" (UID: "78bff99a-9296-41fe-ac5d-b41a183e2349"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.543801 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619056a7-dcfd-4038-a060-219937115302" path="/var/lib/kubelet/pods/619056a7-dcfd-4038-a060-219937115302/volumes" Mar 20 15:43:31 crc kubenswrapper[4730]: I0320 15:43:31.597891 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78bff99a-9296-41fe-ac5d-b41a183e2349-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:32 crc kubenswrapper[4730]: I0320 15:43:32.203932 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 15:43:32 crc kubenswrapper[4730]: I0320 15:43:32.203962 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"78bff99a-9296-41fe-ac5d-b41a183e2349","Type":"ContainerDied","Data":"41294ad67a411ffc3356abec6adfd499182ee81671d1eae6d8ef8df30b5e38f9"} Mar 20 15:43:32 crc kubenswrapper[4730]: I0320 15:43:32.205325 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41294ad67a411ffc3356abec6adfd499182ee81671d1eae6d8ef8df30b5e38f9" Mar 20 15:43:33 crc kubenswrapper[4730]: I0320 15:43:33.204356 4730 csr.go:261] certificate signing request csr-hm6sj is approved, waiting to be issued Mar 20 15:43:33 crc kubenswrapper[4730]: I0320 15:43:33.213782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" event={"ID":"7d87adfe-3206-4175-8d8f-5a00015cc61e","Type":"ContainerStarted","Data":"bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e"} Mar 20 15:43:33 crc kubenswrapper[4730]: I0320 15:43:33.215102 4730 csr.go:257] certificate signing request csr-hm6sj is issued Mar 20 15:43:33 crc kubenswrapper[4730]: I0320 15:43:33.230214 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" podStartSLOduration=41.712544319 podStartE2EDuration="1m33.230196289s" podCreationTimestamp="2026-03-20 15:42:00 +0000 UTC" firstStartedPulling="2026-03-20 15:42:41.018688017 +0000 UTC m=+220.232059386" lastFinishedPulling="2026-03-20 15:43:32.536339977 +0000 UTC m=+271.749711356" observedRunningTime="2026-03-20 15:43:33.228412224 +0000 UTC m=+272.441783593" watchObservedRunningTime="2026-03-20 15:43:33.230196289 +0000 UTC m=+272.443567658" Mar 20 15:43:33 crc kubenswrapper[4730]: E0320 15:43:33.260740 4730 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d87adfe_3206_4175_8d8f_5a00015cc61e.slice/crio-bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d87adfe_3206_4175_8d8f_5a00015cc61e.slice/crio-conmon-bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e.scope\": RecentStats: unable to find data in memory cache]" Mar 20 15:43:34 crc kubenswrapper[4730]: I0320 15:43:34.216369 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-28 20:39:48.298982454 +0000 UTC Mar 20 15:43:34 crc kubenswrapper[4730]: I0320 15:43:34.216724 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6796h56m14.082261607s for next certificate rotation Mar 20 15:43:34 crc kubenswrapper[4730]: I0320 15:43:34.435914 4730 generic.go:334] "Generic (PLEG): container finished" podID="7d87adfe-3206-4175-8d8f-5a00015cc61e" containerID="bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e" exitCode=0 Mar 20 15:43:34 crc kubenswrapper[4730]: I0320 15:43:34.435962 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" event={"ID":"7d87adfe-3206-4175-8d8f-5a00015cc61e","Type":"ContainerDied","Data":"bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e"} Mar 20 15:43:35 crc kubenswrapper[4730]: I0320 15:43:35.292924 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-29 07:33:08.724990939 +0000 UTC Mar 20 15:43:35 crc kubenswrapper[4730]: I0320 15:43:35.293004 4730 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6807h49m33.431990817s for next certificate rotation Mar 20 15:43:35 crc 
kubenswrapper[4730]: I0320 15:43:35.731722 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:43:35 crc kubenswrapper[4730]: I0320 15:43:35.832912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46gxc\" (UniqueName: \"kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc\") pod \"7d87adfe-3206-4175-8d8f-5a00015cc61e\" (UID: \"7d87adfe-3206-4175-8d8f-5a00015cc61e\") " Mar 20 15:43:35 crc kubenswrapper[4730]: I0320 15:43:35.858854 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc" (OuterVolumeSpecName: "kube-api-access-46gxc") pod "7d87adfe-3206-4175-8d8f-5a00015cc61e" (UID: "7d87adfe-3206-4175-8d8f-5a00015cc61e"). InnerVolumeSpecName "kube-api-access-46gxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:43:35 crc kubenswrapper[4730]: I0320 15:43:35.934276 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46gxc\" (UniqueName: \"kubernetes.io/projected/7d87adfe-3206-4175-8d8f-5a00015cc61e-kube-api-access-46gxc\") on node \"crc\" DevicePath \"\"" Mar 20 15:43:36 crc kubenswrapper[4730]: I0320 15:43:36.446431 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" event={"ID":"7d87adfe-3206-4175-8d8f-5a00015cc61e","Type":"ContainerDied","Data":"94b5cc48ba667f00e1531e3aeaa43c807fc5eafadb1a111b034b9b327635ca47"} Mar 20 15:43:36 crc kubenswrapper[4730]: I0320 15:43:36.446483 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b5cc48ba667f00e1531e3aeaa43c807fc5eafadb1a111b034b9b327635ca47" Mar 20 15:43:36 crc kubenswrapper[4730]: I0320 15:43:36.446502 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567022-wf5nv" Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.481548 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerStarted","Data":"4d4654a93d90cdb600960802fd1dbb00c64ea5360936651b230f4fce570720a5"} Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.485123 4730 generic.go:334] "Generic (PLEG): container finished" podID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerID="027ff3ee79dd3768bc7352d26b5e9a7647079a2f17aa58047546ce0332c5b335" exitCode=0 Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.485169 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbtfk" event={"ID":"d5addb8e-1dbc-41a2-8330-8a97251bd52f","Type":"ContainerDied","Data":"027ff3ee79dd3768bc7352d26b5e9a7647079a2f17aa58047546ce0332c5b335"} Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.490860 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerID="5b4ec1c83cd975ec260c6743263c1e94e91b48d95b84a27c4a117e322048189c" exitCode=0 Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.490897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx74p" event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerDied","Data":"5b4ec1c83cd975ec260c6743263c1e94e91b48d95b84a27c4a117e322048189c"} Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.880132 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:43:42 crc kubenswrapper[4730]: I0320 15:43:42.880194 4730 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:43:43 crc kubenswrapper[4730]: I0320 15:43:43.499305 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a347883-e4f7-4fcd-8920-59519533cf43" containerID="9a83b1f8dc654ef4f4276c65729d5eabf19cc5bf1944836a69eeb1d195139aba" exitCode=0 Mar 20 15:43:43 crc kubenswrapper[4730]: I0320 15:43:43.499348 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerDied","Data":"9a83b1f8dc654ef4f4276c65729d5eabf19cc5bf1944836a69eeb1d195139aba"} Mar 20 15:43:43 crc kubenswrapper[4730]: I0320 15:43:43.504945 4730 generic.go:334] "Generic (PLEG): container finished" podID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerID="4d4654a93d90cdb600960802fd1dbb00c64ea5360936651b230f4fce570720a5" exitCode=0 Mar 20 15:43:43 crc kubenswrapper[4730]: I0320 15:43:43.504971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerDied","Data":"4d4654a93d90cdb600960802fd1dbb00c64ea5360936651b230f4fce570720a5"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.544802 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbtfk" event={"ID":"d5addb8e-1dbc-41a2-8330-8a97251bd52f","Type":"ContainerStarted","Data":"b0b7cf1aa8683df6582d7fa32a0ee12665587f8843d63db7cc45648643eb352c"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.548165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx74p" 
event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerStarted","Data":"70a4d7cafba64be4931c32255724c04bc6838f8411a18d46af296704cb3005d7"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.550014 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerStarted","Data":"8a137f491e123e26bfe8e53249675fb8ec9405c7b00ec70ee4673e9a88e5d6bf"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.552908 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerStarted","Data":"ea01bd16d25708ee20b82a86c40239ffa37364d97477418fd97a8527e934e439"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.555913 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerStarted","Data":"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.558078 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerStarted","Data":"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.567553 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerStarted","Data":"1d25cea84c8b33aa09a01d2a67ef03e54e2640ab453060480220fbbf97ebde61"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.577343 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mbtfk" podStartSLOduration=3.583370045 
podStartE2EDuration="1m3.577322051s" podCreationTimestamp="2026-03-20 15:42:46 +0000 UTC" firstStartedPulling="2026-03-20 15:42:48.608899116 +0000 UTC m=+227.822270495" lastFinishedPulling="2026-03-20 15:43:48.602851132 +0000 UTC m=+287.816222501" observedRunningTime="2026-03-20 15:43:49.577235248 +0000 UTC m=+288.790606617" watchObservedRunningTime="2026-03-20 15:43:49.577322051 +0000 UTC m=+288.790693420" Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.584441 4730 generic.go:334] "Generic (PLEG): container finished" podID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerID="33d9320e7f40c8c36e8b7683ba1de97d97d1f3c11216a749d1fb865904354d4e" exitCode=0 Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.584493 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerDied","Data":"33d9320e7f40c8c36e8b7683ba1de97d97d1f3c11216a749d1fb865904354d4e"} Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.603468 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cx74p" podStartSLOduration=2.439415182 podStartE2EDuration="1m2.603452863s" podCreationTimestamp="2026-03-20 15:42:47 +0000 UTC" firstStartedPulling="2026-03-20 15:42:48.57655372 +0000 UTC m=+227.789925089" lastFinishedPulling="2026-03-20 15:43:48.740591401 +0000 UTC m=+287.953962770" observedRunningTime="2026-03-20 15:43:49.597299184 +0000 UTC m=+288.810670553" watchObservedRunningTime="2026-03-20 15:43:49.603452863 +0000 UTC m=+288.816824232" Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.649883 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6mppz" podStartSLOduration=2.211895153 podStartE2EDuration="1m2.649863038s" podCreationTimestamp="2026-03-20 15:42:47 +0000 UTC" firstStartedPulling="2026-03-20 15:42:48.646603687 +0000 UTC 
m=+227.859975056" lastFinishedPulling="2026-03-20 15:43:49.084571572 +0000 UTC m=+288.297942941" observedRunningTime="2026-03-20 15:43:49.648991201 +0000 UTC m=+288.862362570" watchObservedRunningTime="2026-03-20 15:43:49.649863038 +0000 UTC m=+288.863234407" Mar 20 15:43:49 crc kubenswrapper[4730]: I0320 15:43:49.753339 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-flpw2" podStartSLOduration=3.563624451 podStartE2EDuration="1m1.753317514s" podCreationTimestamp="2026-03-20 15:42:48 +0000 UTC" firstStartedPulling="2026-03-20 15:42:50.819201556 +0000 UTC m=+230.032572925" lastFinishedPulling="2026-03-20 15:43:49.008894619 +0000 UTC m=+288.222265988" observedRunningTime="2026-03-20 15:43:49.740335696 +0000 UTC m=+288.953707075" watchObservedRunningTime="2026-03-20 15:43:49.753317514 +0000 UTC m=+288.966688903" Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.592380 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerID="2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75" exitCode=0 Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.592440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerDied","Data":"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75"} Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.596840 4730 generic.go:334] "Generic (PLEG): container finished" podID="558b00fd-2589-4842-8cba-db0cffe8c826" containerID="0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1" exitCode=0 Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.596907 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" 
event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerDied","Data":"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1"} Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.602971 4730 generic.go:334] "Generic (PLEG): container finished" podID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerID="1d25cea84c8b33aa09a01d2a67ef03e54e2640ab453060480220fbbf97ebde61" exitCode=0 Mar 20 15:43:50 crc kubenswrapper[4730]: I0320 15:43:50.603007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerDied","Data":"1d25cea84c8b33aa09a01d2a67ef03e54e2640ab453060480220fbbf97ebde61"} Mar 20 15:43:53 crc kubenswrapper[4730]: I0320 15:43:53.632224 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerStarted","Data":"483eb6bb311253e2717943e6bf3c10d5b83b566c99009c1e72adb9334e3302ee"} Mar 20 15:43:53 crc kubenswrapper[4730]: I0320 15:43:53.649708 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2z2hv" podStartSLOduration=2.890834289 podStartE2EDuration="1m4.649687931s" podCreationTimestamp="2026-03-20 15:42:49 +0000 UTC" firstStartedPulling="2026-03-20 15:42:50.825041296 +0000 UTC m=+230.038412665" lastFinishedPulling="2026-03-20 15:43:52.583894938 +0000 UTC m=+291.797266307" observedRunningTime="2026-03-20 15:43:53.647644508 +0000 UTC m=+292.861015897" watchObservedRunningTime="2026-03-20 15:43:53.649687931 +0000 UTC m=+292.863059300" Mar 20 15:43:55 crc kubenswrapper[4730]: I0320 15:43:55.643517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerStarted","Data":"9a52c050b4986758df8c76456386e12221d78e4ea6fa2b1c10d15807ad001b19"} 
Mar 20 15:43:56 crc kubenswrapper[4730]: I0320 15:43:56.666607 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qmxvf" podStartSLOduration=3.930535787 podStartE2EDuration="1m6.666587716s" podCreationTimestamp="2026-03-20 15:42:50 +0000 UTC" firstStartedPulling="2026-03-20 15:42:51.926292572 +0000 UTC m=+231.139663931" lastFinishedPulling="2026-03-20 15:43:54.662344491 +0000 UTC m=+293.875715860" observedRunningTime="2026-03-20 15:43:56.664639506 +0000 UTC m=+295.878010885" watchObservedRunningTime="2026-03-20 15:43:56.666587716 +0000 UTC m=+295.879959095" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.365581 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.365643 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.497734 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.497828 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.656984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerStarted","Data":"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e"} Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.687592 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rptq" podStartSLOduration=3.689957659 podStartE2EDuration="1m7.687572191s" podCreationTimestamp="2026-03-20 
15:42:50 +0000 UTC" firstStartedPulling="2026-03-20 15:42:51.916345706 +0000 UTC m=+231.129717065" lastFinishedPulling="2026-03-20 15:43:55.913960228 +0000 UTC m=+295.127331597" observedRunningTime="2026-03-20 15:43:57.684357843 +0000 UTC m=+296.897729292" watchObservedRunningTime="2026-03-20 15:43:57.687572191 +0000 UTC m=+296.900943560" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.730063 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:43:57 crc kubenswrapper[4730]: I0320 15:43:57.730117 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.066977 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.067083 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.080963 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.110555 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-st79s"] Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.146967 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.147525 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:43:58 crc kubenswrapper[4730]: I0320 15:43:58.738291 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.106592 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.106659 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.150770 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.603566 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.603633 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.641897 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.696862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerStarted","Data":"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa"} Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.717512 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlnqc" podStartSLOduration=4.247728806 podStartE2EDuration="1m13.717490345s" podCreationTimestamp="2026-03-20 15:42:46 +0000 UTC" firstStartedPulling="2026-03-20 15:42:48.538620912 +0000 UTC m=+227.751992281" lastFinishedPulling="2026-03-20 15:43:58.008382451 +0000 UTC m=+297.221753820" 
observedRunningTime="2026-03-20 15:43:59.716291448 +0000 UTC m=+298.929662817" watchObservedRunningTime="2026-03-20 15:43:59.717490345 +0000 UTC m=+298.930861714" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.745681 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:43:59 crc kubenswrapper[4730]: I0320 15:43:59.750575 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134234 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567024-s2r9c"] Mar 20 15:44:00 crc kubenswrapper[4730]: E0320 15:44:00.134476 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bff99a-9296-41fe-ac5d-b41a183e2349" containerName="pruner" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134488 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bff99a-9296-41fe-ac5d-b41a183e2349" containerName="pruner" Mar 20 15:44:00 crc kubenswrapper[4730]: E0320 15:44:00.134512 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" containerName="oc" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134518 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" containerName="oc" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134610 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bff99a-9296-41fe-ac5d-b41a183e2349" containerName="pruner" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134626 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" containerName="oc" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.134982 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.142624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.142704 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.143704 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.144355 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567024-s2r9c"] Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.292239 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdc6x\" (UniqueName: \"kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x\") pod \"auto-csr-approver-29567024-s2r9c\" (UID: \"3f093381-3bf4-49ff-beb4-f44aa012c521\") " pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.361116 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.361353 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6mppz" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="registry-server" containerID="cri-o://ea01bd16d25708ee20b82a86c40239ffa37364d97477418fd97a8527e934e439" gracePeriod=2 Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.393461 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdc6x\" (UniqueName: 
\"kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x\") pod \"auto-csr-approver-29567024-s2r9c\" (UID: \"3f093381-3bf4-49ff-beb4-f44aa012c521\") " pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.414141 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdc6x\" (UniqueName: \"kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x\") pod \"auto-csr-approver-29567024-s2r9c\" (UID: \"3f093381-3bf4-49ff-beb4-f44aa012c521\") " pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.450415 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.495232 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.495302 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.904205 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567024-s2r9c"] Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.973048 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:44:00 crc kubenswrapper[4730]: I0320 15:44:00.973655 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.366601 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.367056 4730 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cx74p" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="registry-server" containerID="cri-o://70a4d7cafba64be4931c32255724c04bc6838f8411a18d46af296704cb3005d7" gracePeriod=2 Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.550384 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rptq" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" probeResult="failure" output=< Mar 20 15:44:01 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 15:44:01 crc kubenswrapper[4730]: > Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.707057 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" event={"ID":"3f093381-3bf4-49ff-beb4-f44aa012c521","Type":"ContainerStarted","Data":"ae9eb78df15bf57e2c0e8bf27ad609713813b7d995be50840b72aa366815e8fc"} Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.710005 4730 generic.go:334] "Generic (PLEG): container finished" podID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerID="ea01bd16d25708ee20b82a86c40239ffa37364d97477418fd97a8527e934e439" exitCode=0 Mar 20 15:44:01 crc kubenswrapper[4730]: I0320 15:44:01.710213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerDied","Data":"ea01bd16d25708ee20b82a86c40239ffa37364d97477418fd97a8527e934e439"} Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.013463 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qmxvf" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="registry-server" probeResult="failure" output=< Mar 20 15:44:02 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 
20 15:44:02 crc kubenswrapper[4730]: > Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.659750 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.718229 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerID="70a4d7cafba64be4931c32255724c04bc6838f8411a18d46af296704cb3005d7" exitCode=0 Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.718285 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx74p" event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerDied","Data":"70a4d7cafba64be4931c32255724c04bc6838f8411a18d46af296704cb3005d7"} Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.722510 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mppz" event={"ID":"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8","Type":"ContainerDied","Data":"b24679ab1f6c7ce28e2ed00c17a4988d013e4500b53404671b46ef5509b85dc8"} Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.722565 4730 scope.go:117] "RemoveContainer" containerID="ea01bd16d25708ee20b82a86c40239ffa37364d97477418fd97a8527e934e439" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.722618 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mppz" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.723969 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqvn\" (UniqueName: \"kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn\") pod \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.724039 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities\") pod \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.724060 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content\") pod \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\" (UID: \"168c4cbd-3a44-48a5-be95-0eb4ea01d6c8\") " Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.727670 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities" (OuterVolumeSpecName: "utilities") pod "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" (UID: "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.731421 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn" (OuterVolumeSpecName: "kube-api-access-vwqvn") pod "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" (UID: "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8"). InnerVolumeSpecName "kube-api-access-vwqvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.737667 4730 scope.go:117] "RemoveContainer" containerID="4d4654a93d90cdb600960802fd1dbb00c64ea5360936651b230f4fce570720a5" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.755913 4730 scope.go:117] "RemoveContainer" containerID="f14d75be5024f0d9fd4c3cf59a10c4fbb452ecc1d6a3188f6fd40ab5bbb8ffe9" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.766122 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z2hv"] Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.766369 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2z2hv" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="registry-server" containerID="cri-o://483eb6bb311253e2717943e6bf3c10d5b83b566c99009c1e72adb9334e3302ee" gracePeriod=2 Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.787479 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" (UID: "168c4cbd-3a44-48a5-be95-0eb4ea01d6c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.824941 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwqvn\" (UniqueName: \"kubernetes.io/projected/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-kube-api-access-vwqvn\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.824972 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:02 crc kubenswrapper[4730]: I0320 15:44:02.824981 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:03 crc kubenswrapper[4730]: I0320 15:44:03.729233 4730 generic.go:334] "Generic (PLEG): container finished" podID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerID="483eb6bb311253e2717943e6bf3c10d5b83b566c99009c1e72adb9334e3302ee" exitCode=0 Mar 20 15:44:03 crc kubenswrapper[4730]: I0320 15:44:03.730341 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerDied","Data":"483eb6bb311253e2717943e6bf3c10d5b83b566c99009c1e72adb9334e3302ee"} Mar 20 15:44:03 crc kubenswrapper[4730]: I0320 15:44:03.758632 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:44:03 crc kubenswrapper[4730]: I0320 15:44:03.763628 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6mppz"] Mar 20 15:44:03 crc kubenswrapper[4730]: E0320 15:44:03.774262 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod168c4cbd_3a44_48a5_be95_0eb4ea01d6c8.slice/crio-b24679ab1f6c7ce28e2ed00c17a4988d013e4500b53404671b46ef5509b85dc8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod168c4cbd_3a44_48a5_be95_0eb4ea01d6c8.slice\": RecentStats: unable to find data in memory cache]" Mar 20 15:44:03 crc kubenswrapper[4730]: I0320 15:44:03.909100 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.043182 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5sxl\" (UniqueName: \"kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl\") pod \"7a118148-49cc-4b61-bb43-44e3ef2c3048\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.043306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content\") pod \"7a118148-49cc-4b61-bb43-44e3ef2c3048\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.043350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities\") pod \"7a118148-49cc-4b61-bb43-44e3ef2c3048\" (UID: \"7a118148-49cc-4b61-bb43-44e3ef2c3048\") " Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.044372 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities" (OuterVolumeSpecName: "utilities") pod "7a118148-49cc-4b61-bb43-44e3ef2c3048" (UID: 
"7a118148-49cc-4b61-bb43-44e3ef2c3048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.069239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl" (OuterVolumeSpecName: "kube-api-access-v5sxl") pod "7a118148-49cc-4b61-bb43-44e3ef2c3048" (UID: "7a118148-49cc-4b61-bb43-44e3ef2c3048"). InnerVolumeSpecName "kube-api-access-v5sxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.096264 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a118148-49cc-4b61-bb43-44e3ef2c3048" (UID: "7a118148-49cc-4b61-bb43-44e3ef2c3048"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.145101 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5sxl\" (UniqueName: \"kubernetes.io/projected/7a118148-49cc-4b61-bb43-44e3ef2c3048-kube-api-access-v5sxl\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.145164 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.145177 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a118148-49cc-4b61-bb43-44e3ef2c3048-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.339092 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-6f56868448-2fbxh"] Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.339335 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" podUID="9d747680-5dde-4793-863a-252a5f67233a" containerName="controller-manager" containerID="cri-o://6a969344117b0e223a95c25576df0115301c9833e8c4f08723a3caa581f16b8b" gracePeriod=30 Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.437066 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2"] Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.437661 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" containerName="route-controller-manager" containerID="cri-o://34175b3ad56804c8ddffbef2e3fafd18e67e87b94833ec1a23054a68de4fe0be" gracePeriod=30 Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.737954 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx74p" event={"ID":"7a118148-49cc-4b61-bb43-44e3ef2c3048","Type":"ContainerDied","Data":"189336796f3983b3d071ba1d85e4dc4b864a1692cfe884a4465fbc035616d986"} Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.738015 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cx74p" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.738036 4730 scope.go:117] "RemoveContainer" containerID="70a4d7cafba64be4931c32255724c04bc6838f8411a18d46af296704cb3005d7" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.752218 4730 scope.go:117] "RemoveContainer" containerID="5b4ec1c83cd975ec260c6743263c1e94e91b48d95b84a27c4a117e322048189c" Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.763203 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.769791 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cx74p"] Mar 20 15:44:04 crc kubenswrapper[4730]: I0320 15:44:04.788545 4730 scope.go:117] "RemoveContainer" containerID="18e38e92f40a21b89290233a7ffe301a018e759b353ad6883c83ed52a1e47762" Mar 20 15:44:05 crc kubenswrapper[4730]: I0320 15:44:05.540529 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" path="/var/lib/kubelet/pods/168c4cbd-3a44-48a5-be95-0eb4ea01d6c8/volumes" Mar 20 15:44:05 crc kubenswrapper[4730]: I0320 15:44:05.541227 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" path="/var/lib/kubelet/pods/7a118148-49cc-4b61-bb43-44e3ef2c3048/volumes" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.317811 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318078 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="extract-utilities" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318090 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="extract-utilities" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318099 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318105 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318117 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="extract-content" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318122 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="extract-content" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318133 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="extract-content" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318138 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="extract-content" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318148 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="extract-utilities" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318154 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="extract-utilities" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.318161 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318170 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318283 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a118148-49cc-4b61-bb43-44e3ef2c3048" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318302 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="168c4cbd-3a44-48a5-be95-0eb4ea01d6c8" containerName="registry-server" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.318699 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.319934 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320103 4730 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.320555 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320608 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad" gracePeriod=15 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320688 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b" 
gracePeriod=15 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320777 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf" gracePeriod=15 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320594 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4" gracePeriod=15 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321072 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321097 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321108 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321124 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321134 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321146 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc 
kubenswrapper[4730]: I0320 15:44:06.321155 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321170 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321179 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321195 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321204 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321215 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321223 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321237 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321268 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321281 4730 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321290 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321460 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321476 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321486 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321500 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321513 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321523 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321537 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.321673 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc 
kubenswrapper[4730]: I0320 15:44:06.321685 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321824 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.321838 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.320720 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007" gracePeriod=15 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.375307 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.477921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478015 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 
crc kubenswrapper[4730]: I0320 15:44:06.478042 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478065 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478093 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478120 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.478192 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.543083 4730 patch_prober.go:28] interesting pod/controller-manager-6f56868448-2fbxh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.543227 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" podUID="9d747680-5dde-4793-863a-252a5f67233a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: E0320 15:44:06.544206 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event=< Mar 20 15:44:06 crc kubenswrapper[4730]: &Event{ObjectMeta:{controller-manager-6f56868448-2fbxh.189e971b4120d4fc openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-6f56868448-2fbxh,UID:9d747680-5dde-4793-863a-252a5f67233a,APIVersion:v1,ResourceVersion:29368,FieldPath:spec.containers{controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get 
"https://10.217.0.61:8443/healthz": dial tcp 10.217.0.61:8443: connect: connection refused Mar 20 15:44:06 crc kubenswrapper[4730]: body: Mar 20 15:44:06 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC m=+305.756507365,LastTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC m=+305.756507365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:44:06 crc kubenswrapper[4730]: > Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579582 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579686 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579729 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579782 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579798 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579826 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579866 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579881 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579902 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579916 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.580030 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.580037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.580061 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.580086 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.579980 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.663834 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.677860 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.678432 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.678698 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.679162 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: W0320 15:44:06.683824 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c34ac7228db74b5c2b59324dec36183935690e57ab3c48f33b950fb7fee99773 WatchSource:0}: Error finding container c34ac7228db74b5c2b59324dec36183935690e57ab3c48f33b950fb7fee99773: Status 404 returned error can't find the container with id c34ac7228db74b5c2b59324dec36183935690e57ab3c48f33b950fb7fee99773 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.751424 4730 generic.go:334] "Generic (PLEG): container finished" podID="9d747680-5dde-4793-863a-252a5f67233a" 
containerID="6a969344117b0e223a95c25576df0115301c9833e8c4f08723a3caa581f16b8b" exitCode=0 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.751501 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" event={"ID":"9d747680-5dde-4793-863a-252a5f67233a","Type":"ContainerDied","Data":"6a969344117b0e223a95c25576df0115301c9833e8c4f08723a3caa581f16b8b"} Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.752882 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c34ac7228db74b5c2b59324dec36183935690e57ab3c48f33b950fb7fee99773"} Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.755285 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.756371 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.757356 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007" exitCode=2 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.759283 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" containerID="34175b3ad56804c8ddffbef2e3fafd18e67e87b94833ec1a23054a68de4fe0be" exitCode=0 Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.759312 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" 
event={"ID":"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0","Type":"ContainerDied","Data":"34175b3ad56804c8ddffbef2e3fafd18e67e87b94833ec1a23054a68de4fe0be"} Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.761735 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z2hv" event={"ID":"715cbff8-9674-4896-8deb-54a6e9a8899e","Type":"ContainerDied","Data":"8cd9a9489d9a0dbe8438d77747b3f3bcf8afc79d1dcd6dcfc2035db6222041b2"} Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.761775 4730 scope.go:117] "RemoveContainer" containerID="483eb6bb311253e2717943e6bf3c10d5b83b566c99009c1e72adb9334e3302ee" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.761797 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z2hv" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.762468 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.762948 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.763497 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.162:6443: connect: connection refused" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.780743 4730 scope.go:117] "RemoveContainer" containerID="33d9320e7f40c8c36e8b7683ba1de97d97d1f3c11216a749d1fb865904354d4e" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.781420 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content\") pod \"715cbff8-9674-4896-8deb-54a6e9a8899e\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.781496 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities\") pod \"715cbff8-9674-4896-8deb-54a6e9a8899e\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.781541 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmf5w\" (UniqueName: \"kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w\") pod \"715cbff8-9674-4896-8deb-54a6e9a8899e\" (UID: \"715cbff8-9674-4896-8deb-54a6e9a8899e\") " Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.782496 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities" (OuterVolumeSpecName: "utilities") pod "715cbff8-9674-4896-8deb-54a6e9a8899e" (UID: "715cbff8-9674-4896-8deb-54a6e9a8899e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.785491 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w" (OuterVolumeSpecName: "kube-api-access-pmf5w") pod "715cbff8-9674-4896-8deb-54a6e9a8899e" (UID: "715cbff8-9674-4896-8deb-54a6e9a8899e"). InnerVolumeSpecName "kube-api-access-pmf5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.796361 4730 scope.go:117] "RemoveContainer" containerID="3d0d0e86eafcfd3f1d67f4c8dfdc39b982cb1033b59bc5dcda037270619199e3" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.809749 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "715cbff8-9674-4896-8deb-54a6e9a8899e" (UID: "715cbff8-9674-4896-8deb-54a6e9a8899e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.883382 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.883427 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715cbff8-9674-4896-8deb-54a6e9a8899e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:06 crc kubenswrapper[4730]: I0320 15:44:06.883437 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmf5w\" (UniqueName: \"kubernetes.io/projected/715cbff8-9674-4896-8deb-54a6e9a8899e-kube-api-access-pmf5w\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.086675 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.087383 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.087854 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.153974 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.154026 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.198050 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.198688 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.199161 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.199446 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.199669 
4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.488008 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.489509 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.489762 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.489943 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.490172 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.490456 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.492433 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.492790 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.493218 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.493562 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: 
connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.493847 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.494080 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.494354 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.593661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles\") pod \"9d747680-5dde-4793-863a-252a5f67233a\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.593751 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config\") pod \"9d747680-5dde-4793-863a-252a5f67233a\" (UID: 
\"9d747680-5dde-4793-863a-252a5f67233a\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.593774 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca\") pod \"9d747680-5dde-4793-863a-252a5f67233a\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.593807 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert\") pod \"9d747680-5dde-4793-863a-252a5f67233a\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.593857 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68xrx\" (UniqueName: \"kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx\") pod \"9d747680-5dde-4793-863a-252a5f67233a\" (UID: \"9d747680-5dde-4793-863a-252a5f67233a\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.594807 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d747680-5dde-4793-863a-252a5f67233a" (UID: "9d747680-5dde-4793-863a-252a5f67233a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.594854 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d747680-5dde-4793-863a-252a5f67233a" (UID: "9d747680-5dde-4793-863a-252a5f67233a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.595117 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config" (OuterVolumeSpecName: "config") pod "9d747680-5dde-4793-863a-252a5f67233a" (UID: "9d747680-5dde-4793-863a-252a5f67233a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.599047 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx" (OuterVolumeSpecName: "kube-api-access-68xrx") pod "9d747680-5dde-4793-863a-252a5f67233a" (UID: "9d747680-5dde-4793-863a-252a5f67233a"). InnerVolumeSpecName "kube-api-access-68xrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.599469 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d747680-5dde-4793-863a-252a5f67233a" (UID: "9d747680-5dde-4793-863a-252a5f67233a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.694969 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config\") pod \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695129 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert\") pod \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695170 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdnv\" (UniqueName: \"kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv\") pod \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca\") pod \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\" (UID: \"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0\") " Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695776 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68xrx\" (UniqueName: \"kubernetes.io/projected/9d747680-5dde-4793-863a-252a5f67233a-kube-api-access-68xrx\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695817 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-proxy-ca-bundles\") on node \"crc\" DevicePath 
\"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695838 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695856 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d747680-5dde-4793-863a-252a5f67233a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.695876 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d747680-5dde-4793-863a-252a5f67233a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.696139 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" (UID: "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.696178 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config" (OuterVolumeSpecName: "config") pod "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" (UID: "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.698023 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv" (OuterVolumeSpecName: "kube-api-access-4xdnv") pod "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" (UID: "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0"). 
InnerVolumeSpecName "kube-api-access-4xdnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: E0320 15:44:07.698468 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event=< Mar 20 15:44:07 crc kubenswrapper[4730]: &Event{ObjectMeta:{controller-manager-6f56868448-2fbxh.189e971b4120d4fc openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-6f56868448-2fbxh,UID:9d747680-5dde-4793-863a-252a5f67233a,APIVersion:v1,ResourceVersion:29368,FieldPath:spec.containers{controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.61:8443/healthz": dial tcp 10.217.0.61:8443: connect: connection refused Mar 20 15:44:07 crc kubenswrapper[4730]: body: Mar 20 15:44:07 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC m=+305.756507365,LastTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC m=+305.756507365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:44:07 crc kubenswrapper[4730]: > Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.699792 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" (UID: "d4d20fab-86cc-44d8-a8b9-c60f6835c5e0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.769806 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f"} Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.770591 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.771061 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.771439 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.771756 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.771975 4730 generic.go:334] "Generic (PLEG): container finished" podID="e48519c7-0cdc-419b-bd72-2bab0e911af8" containerID="d1983b19ac38ba32d4fa20a02bf50a7e57dd7a9e5c61bb3d5cfddbb58ce8788c" exitCode=0 Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.772020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e48519c7-0cdc-419b-bd72-2bab0e911af8","Type":"ContainerDied","Data":"d1983b19ac38ba32d4fa20a02bf50a7e57dd7a9e5c61bb3d5cfddbb58ce8788c"} Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.772125 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.772576 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.772887 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 
15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.773075 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.773312 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.773607 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.773989 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.774870 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.776072 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.777286 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad" exitCode=0 Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.777319 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf" exitCode=0 Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.777331 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b" exitCode=0 Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.777409 4730 scope.go:117] "RemoveContainer" containerID="688754d61ab497303640174e7b0a3aab82b31f77e5cc50b6993f85222e3053c5" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.779182 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" event={"ID":"d4d20fab-86cc-44d8-a8b9-c60f6835c5e0","Type":"ContainerDied","Data":"4e99a4704ba10afb385846d96e0db2ec14bcce1391fc5cd9c5fede455f436bf0"} Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.779201 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.779894 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.780108 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.780406 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.780799 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.781149 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.781443 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.782235 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f093381-3bf4-49ff-beb4-f44aa012c521" containerID="6ba1acd4b6440038c4d2f11f36de1734bab2b24cdd1e2d4018cd0e97b421d598" exitCode=0 Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.782349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" event={"ID":"3f093381-3bf4-49ff-beb4-f44aa012c521","Type":"ContainerDied","Data":"6ba1acd4b6440038c4d2f11f36de1734bab2b24cdd1e2d4018cd0e97b421d598"} Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.782807 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.783350 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: 
connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.783640 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.783896 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.784188 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.784368 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" event={"ID":"9d747680-5dde-4793-863a-252a5f67233a","Type":"ContainerDied","Data":"2d0d6a6d61d99c2e38791ba9cd9580b05d3d5ca5c004c4c372aff09d24128220"} Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.784400 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.784500 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.784759 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.785275 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.785542 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.785793 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.786447 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.786750 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.787041 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.787327 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.796375 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.796412 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.796424 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.796433 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdnv\" (UniqueName: \"kubernetes.io/projected/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0-kube-api-access-4xdnv\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.806775 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.807308 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.807691 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.808022 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.808324 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.808584 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.808863 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.809272 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.809527 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.809820 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.810036 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.810404 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.810753 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.811057 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.812959 4730 scope.go:117] "RemoveContainer" containerID="34175b3ad56804c8ddffbef2e3fafd18e67e87b94833ec1a23054a68de4fe0be" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.828394 4730 scope.go:117] "RemoveContainer" containerID="6a969344117b0e223a95c25576df0115301c9833e8c4f08723a3caa581f16b8b" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.834318 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.834829 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.835130 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.835314 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.835462 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.835606 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.835753 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:07 crc kubenswrapper[4730]: I0320 15:44:07.836096 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.364518 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4869c69128f74d9c3b178ea6c8c8d38df169b6bce05eb821a65f0aaf514c563a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f3c2ad90e251062165f8d6623ca4994c0b3e28324e4b5b17fd588b162ec97766\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746912226},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ce2b0bfbec08802afec185b6e
ebec59a5a016291cad2c3515b0d06af6c34fde3\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d5656ec3f5691d96c2da35350810e5bd700559213851b28fc3523a059efce76f\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1252734685},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:422eef49f9b56aaa481c870199db8b853dba5d36f00adcc19d22a6960345f1cc\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:667c2f632cea73b8b5354e1fbd365169f285c9b8460c5e81f63967a72b8f90e8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223676630},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e
\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\
":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.365479 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.366329 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 
20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.366836 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.367113 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.367140 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.681080 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.682053 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.682702 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.682972 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.683218 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.683452 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.683690 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.683906 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.684100 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.684327 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.793973 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.795293 4730 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4" exitCode=0 Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.795455 4730 
scope.go:117] "RemoveContainer" containerID="b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.795560 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809208 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809264 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809296 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809386 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809382 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809459 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809775 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809801 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.809829 4730 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.814173 4730 scope.go:117] "RemoveContainer" containerID="1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.825806 4730 scope.go:117] "RemoveContainer" 
containerID="5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.838299 4730 scope.go:117] "RemoveContainer" containerID="180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.853924 4730 scope.go:117] "RemoveContainer" containerID="2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.877617 4730 scope.go:117] "RemoveContainer" containerID="e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.898589 4730 scope.go:117] "RemoveContainer" containerID="b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.903755 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\": container with ID starting with b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad not found: ID does not exist" containerID="b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.903788 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad"} err="failed to get container status \"b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\": rpc error: code = NotFound desc = could not find container \"b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad\": container with ID starting with b137ba07c1866334b1897d83f9590d1627b5724b950efaef0bbda497186fe1ad not found: ID does not exist" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.903826 4730 scope.go:117] "RemoveContainer" 
containerID="1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.904848 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\": container with ID starting with 1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf not found: ID does not exist" containerID="1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.904889 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf"} err="failed to get container status \"1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\": rpc error: code = NotFound desc = could not find container \"1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf\": container with ID starting with 1f8e7f1bfc56e54242e12f30487e8120eeb3d85dc86cfc17b587255889bd60bf not found: ID does not exist" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.904904 4730 scope.go:117] "RemoveContainer" containerID="5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.909445 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\": container with ID starting with 5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b not found: ID does not exist" containerID="5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.909541 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b"} err="failed to get container status \"5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\": rpc error: code = NotFound desc = could not find container \"5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b\": container with ID starting with 5d96c704317c9bfbcb499beae611ca6fcb6337084728df20ad791267de34359b not found: ID does not exist" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.909566 4730 scope.go:117] "RemoveContainer" containerID="180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.910226 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\": container with ID starting with 180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007 not found: ID does not exist" containerID="180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.910292 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007"} err="failed to get container status \"180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\": rpc error: code = NotFound desc = could not find container \"180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007\": container with ID starting with 180d79b8131e00449715e74d883031078dcceb74ca44ea372b6c3f651d617007 not found: ID does not exist" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.910331 4730 scope.go:117] "RemoveContainer" containerID="2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.910697 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\": container with ID starting with 2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4 not found: ID does not exist" containerID="2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.910741 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4"} err="failed to get container status \"2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\": rpc error: code = NotFound desc = could not find container \"2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4\": container with ID starting with 2660f772d76479685f586aeb9b242975695ac5e28668840bac6b8a75c09443a4 not found: ID does not exist" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.910756 4730 scope.go:117] "RemoveContainer" containerID="e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e" Mar 20 15:44:08 crc kubenswrapper[4730]: E0320 15:44:08.911080 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\": container with ID starting with e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e not found: ID does not exist" containerID="e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e" Mar 20 15:44:08 crc kubenswrapper[4730]: I0320 15:44:08.911141 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e"} err="failed to get container status \"e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\": rpc error: code = NotFound desc = could not find container 
\"e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e\": container with ID starting with e683393c25a03a11f7a3e2e4bc5231217db3fa7cd807971f61734acea801ca9e not found: ID does not exist" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.023763 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.024273 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.024690 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.025087 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.025442 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": 
dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.025690 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.025990 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.026184 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.026467 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.027786 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.028140 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.028346 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.028593 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.028845 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.029090 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.029360 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.029596 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.029879 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.109701 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.110059 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.110207 4730 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.110467 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.110671 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.110845 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.111001 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.111150 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.215768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir\") pod \"e48519c7-0cdc-419b-bd72-2bab0e911af8\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.215951 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e48519c7-0cdc-419b-bd72-2bab0e911af8" (UID: "e48519c7-0cdc-419b-bd72-2bab0e911af8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216027 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock\") pod \"e48519c7-0cdc-419b-bd72-2bab0e911af8\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216110 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock" (OuterVolumeSpecName: "var-lock") pod "e48519c7-0cdc-419b-bd72-2bab0e911af8" (UID: "e48519c7-0cdc-419b-bd72-2bab0e911af8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdc6x\" (UniqueName: \"kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x\") pod \"3f093381-3bf4-49ff-beb4-f44aa012c521\" (UID: \"3f093381-3bf4-49ff-beb4-f44aa012c521\") " Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216333 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access\") pod \"e48519c7-0cdc-419b-bd72-2bab0e911af8\" (UID: \"e48519c7-0cdc-419b-bd72-2bab0e911af8\") " Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216902 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.216933 4730 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e48519c7-0cdc-419b-bd72-2bab0e911af8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.222052 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e48519c7-0cdc-419b-bd72-2bab0e911af8" (UID: "e48519c7-0cdc-419b-bd72-2bab0e911af8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.223129 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x" (OuterVolumeSpecName: "kube-api-access-kdc6x") pod "3f093381-3bf4-49ff-beb4-f44aa012c521" (UID: "3f093381-3bf4-49ff-beb4-f44aa012c521"). InnerVolumeSpecName "kube-api-access-kdc6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.318088 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e48519c7-0cdc-419b-bd72-2bab0e911af8-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.318128 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdc6x\" (UniqueName: \"kubernetes.io/projected/3f093381-3bf4-49ff-beb4-f44aa012c521-kube-api-access-kdc6x\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.540537 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.801955 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.802012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e48519c7-0cdc-419b-bd72-2bab0e911af8","Type":"ContainerDied","Data":"5af4b462745de074ee5968bfbd84bbf7129ced0fcb060ef525aacd425c95e3c1"} Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.802676 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af4b462745de074ee5968bfbd84bbf7129ced0fcb060ef525aacd425c95e3c1" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.804727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" event={"ID":"3f093381-3bf4-49ff-beb4-f44aa012c521","Type":"ContainerDied","Data":"ae9eb78df15bf57e2c0e8bf27ad609713813b7d995be50840b72aa366815e8fc"} Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.804760 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9eb78df15bf57e2c0e8bf27ad609713813b7d995be50840b72aa366815e8fc" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.804818 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.807795 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.808086 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.808411 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.808620 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.808907 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.809158 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.809371 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.809677 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.809906 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.810126 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" 
pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.810339 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.810506 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.810760 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:09 crc kubenswrapper[4730]: I0320 15:44:09.811001 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.542009 4730 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.542617 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.543221 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.543616 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.544022 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.544209 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.544396 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.544557 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.544716 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.579183 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.579889 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: 
connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.580342 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.580593 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.580832 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.581067 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.581329 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.581539 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:10 crc kubenswrapper[4730]: I0320 15:44:10.581741 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.009162 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.009778 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.010165 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: 
connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.010468 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.010714 4730 status_manager.go:851] "Failed to get status for pod" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.010977 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.011208 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.011459 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: 
connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.011781 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.012172 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.046852 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.047656 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.048027 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.048553 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.048877 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.049219 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.049502 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.049754 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 
15:44:11.049937 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.050192 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.989229 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.993871 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.994167 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 
crc kubenswrapper[4730]: I0320 15:44:11.994401 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.994614 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.994930 4730 status_manager.go:851] "Failed to get status for pod" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:11 crc kubenswrapper[4730]: I0320 15:44:11.995473 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.002297 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:12 
crc kubenswrapper[4730]: I0320 15:44:12.002669 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.879651 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.879725 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.879789 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.880510 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 15:44:12 crc kubenswrapper[4730]: I0320 15:44:12.880568 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0" gracePeriod=600 Mar 20 15:44:13 crc kubenswrapper[4730]: I0320 15:44:13.008571 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0" exitCode=0 Mar 20 15:44:13 crc kubenswrapper[4730]: I0320 15:44:13.008628 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0"} Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.018339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583"} Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.019075 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.019505 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 
crc kubenswrapper[4730]: I0320 15:44:14.019886 4730 status_manager.go:851] "Failed to get status for pod" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-p5qvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.020176 4730 status_manager.go:851] "Failed to get status for pod" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.020449 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.020757 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.021003 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: 
connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.021261 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.021616 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:14 crc kubenswrapper[4730]: I0320 15:44:14.021857 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.939498 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.939953 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.940603 4730 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.940974 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.941460 4730 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:15 crc kubenswrapper[4730]: I0320 15:44:15.941506 4730 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 15:44:15 crc kubenswrapper[4730]: E0320 15:44:15.941974 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Mar 20 15:44:16 crc kubenswrapper[4730]: E0320 15:44:16.142675 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Mar 20 15:44:16 crc kubenswrapper[4730]: E0320 15:44:16.544300 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Mar 20 15:44:17 crc kubenswrapper[4730]: 
E0320 15:44:17.345081 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Mar 20 15:44:17 crc kubenswrapper[4730]: E0320 15:44:17.555942 4730 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" volumeName="registry-storage" Mar 20 15:44:17 crc kubenswrapper[4730]: E0320 15:44:17.699625 4730 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event=< Mar 20 15:44:17 crc kubenswrapper[4730]: &Event{ObjectMeta:{controller-manager-6f56868448-2fbxh.189e971b4120d4fc openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-6f56868448-2fbxh,UID:9d747680-5dde-4793-863a-252a5f67233a,APIVersion:v1,ResourceVersion:29368,FieldPath:spec.containers{controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.61:8443/healthz": dial tcp 10.217.0.61:8443: connect: connection refused Mar 20 15:44:17 crc kubenswrapper[4730]: body: Mar 20 15:44:17 crc kubenswrapper[4730]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC m=+305.756507365,LastTimestamp:2026-03-20 15:44:06.543135996 +0000 UTC 
m=+305.756507365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 15:44:17 crc kubenswrapper[4730]: > Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.640724 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T15:44:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4869c69128f74d9c3b178ea6c8c8d38df169b6bce05eb821a65f0aaf514c563a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f3c2ad90e251062165f8d6623ca4994c0b3e28324e4b5b17fd588b162ec97766\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746912226},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ce2b0bfbec08802afec185b6eebec59a5a016291cad2c3515b0d06af6c34fde3\\\",\\\"re
gistry.redhat.io/redhat/certified-operator-index@sha256:d5656ec3f5691d96c2da35350810e5bd700559213851b28fc3523a059efce76f\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1252734685},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:422eef49f9b56aaa481c870199db8b853dba5d36f00adcc19d22a6960345f1cc\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:667c2f632cea73b8b5354e1fbd365169f285c9b8460c5e81f63967a72b8f90e8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223676630},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"si
zeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac3
9aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":
[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.641138 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.641383 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 
15:44:18.641553 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.641896 4730 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.641926 4730 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 15:44:18 crc kubenswrapper[4730]: E0320 15:44:18.945828 4730 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.533094 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.534801 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.535527 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.535949 4730 status_manager.go:851] "Failed to get status for pod" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-p5qvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.536422 4730 status_manager.go:851] "Failed to get status for pod" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.536821 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.537302 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.537715 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.538133 4730 status_manager.go:851] "Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.538617 4730 status_manager.go:851] "Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.539672 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.554340 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.554402 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:19 crc kubenswrapper[4730]: E0320 15:44:19.555070 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:19 crc kubenswrapper[4730]: I0320 15:44:19.555874 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:19 crc kubenswrapper[4730]: W0320 15:44:19.587110 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-423757f9018398c0322c14eda69c0767c93dd2116d3b88327e7c8f1d21a1bb58 WatchSource:0}: Error finding container 423757f9018398c0322c14eda69c0767c93dd2116d3b88327e7c8f1d21a1bb58: Status 404 returned error can't find the container with id 423757f9018398c0322c14eda69c0767c93dd2116d3b88327e7c8f1d21a1bb58 Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.049498 4730 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="773693b0b72251a6e1ea08a21557ab2598e497198dbc99a6932258ce18180350" exitCode=0 Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.049580 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"773693b0b72251a6e1ea08a21557ab2598e497198dbc99a6932258ce18180350"} Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.049847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"423757f9018398c0322c14eda69c0767c93dd2116d3b88327e7c8f1d21a1bb58"} Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.050188 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.050205 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.050648 4730 status_manager.go:851] 
"Failed to get status for pod" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" pod="openshift-marketplace/redhat-marketplace-2z2hv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2z2hv\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: E0320 15:44:20.050824 4730 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.050970 4730 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.051416 4730 status_manager.go:851] "Failed to get status for pod" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" pod="openshift-marketplace/certified-operators-rlnqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rlnqc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.052063 4730 status_manager.go:851] "Failed to get status for pod" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" pod="openshift-infra/auto-csr-approver-29567024-s2r9c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29567024-s2r9c\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.052369 4730 status_manager.go:851] "Failed to get status for pod" 
podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" pod="openshift-marketplace/redhat-operators-qmxvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qmxvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.052766 4730 status_manager.go:851] "Failed to get status for pod" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.053367 4730 status_manager.go:851] "Failed to get status for pod" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-p5qvf\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.053724 4730 status_manager.go:851] "Failed to get status for pod" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" pod="openshift-marketplace/redhat-operators-8rptq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8rptq\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.054155 4730 status_manager.go:851] "Failed to get status for pod" podUID="9d747680-5dde-4793-863a-252a5f67233a" pod="openshift-controller-manager/controller-manager-6f56868448-2fbxh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6f56868448-2fbxh\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.054452 4730 status_manager.go:851] 
"Failed to get status for pod" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" pod="openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-b75b5f765-8wjw2\": dial tcp 38.102.83.162:6443: connect: connection refused" Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.890024 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 20 15:44:20 crc kubenswrapper[4730]: I0320 15:44:20.890084 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.058468 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.059022 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.059067 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606" exitCode=1 Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.059100 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606"} Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.059852 4730 scope.go:117] "RemoveContainer" containerID="1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062620 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7515709c29b59886b5f3c0e0fd7f770aabc4ca181d727a6ffd015554050aa13"} Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062648 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"babadf9682b7a455a0d01b3d1233fc95ac577d6dae2dae96af4cdb0a6f9b912f"} Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062657 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2129bfdd6bb9819f82e5914936de1b5eb2b0d8d4592165d70f6b0017ad7598f4"} Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062667 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"60fd03149c83fb8a865b1433e87a2b82d3b27fda19034cc39ecc33663eb285bf"} Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062677 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f854d225e50ffbb4730722c7c45e64b1732db4cfc74820d3efd2e24b32d69b42"} Mar 
20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062879 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062884 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:21 crc kubenswrapper[4730]: I0320 15:44:21.062904 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.101904 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.103070 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.103107 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1f2bc557068d23d31fb94ba0b8755d440a6066e6a0d9e74613e4436d64e826e"} Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.158512 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.158604 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" 
start-of-body= Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.158638 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.660867 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.661049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.661086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.661123 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:44:22 crc kubenswrapper[4730]: I0320 15:44:22.868888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.153276 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" podUID="2499559b-b31f-4dab-89a0-964964dc596e" containerName="oauth-openshift" containerID="cri-o://e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0" gracePeriod=15 Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.506881 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.661954 4730 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.661998 4730 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.662047 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:46:25.66202742 +0000 UTC m=+444.875398789 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.662075 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.662086 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 15:46:25.662063651 +0000 UTC m=+444.875435070 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.662138 4730 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.663581 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.672767 4730 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: 
E0320 15:44:23.672820 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 15:46:25.672806911 +0000 UTC m=+444.886178280 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.672845 4730 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.672870 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 15:46:25.672864222 +0000 UTC m=+444.886235591 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680368 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680409 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680441 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680479 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680525 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680559 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680591 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.680602 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681362 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681656 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681700 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681732 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681761 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: 
\"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.681813 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5z67\" (UniqueName: \"kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67\") pod \"2499559b-b31f-4dab-89a0-964964dc596e\" (UID: \"2499559b-b31f-4dab-89a0-964964dc596e\") " Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.682078 4730 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.682101 4730 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2499559b-b31f-4dab-89a0-964964dc596e-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.684837 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.685434 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.686392 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.686700 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67" (OuterVolumeSpecName: "kube-api-access-l5z67") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "kube-api-access-l5z67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.687232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.687747 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.687886 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.688184 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.688285 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.688423 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.688449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.688614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2499559b-b31f-4dab-89a0-964964dc596e" (UID: "2499559b-b31f-4dab-89a0-964964dc596e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782717 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782753 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782764 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782773 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782784 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5z67\" (UniqueName: \"kubernetes.io/projected/2499559b-b31f-4dab-89a0-964964dc596e-kube-api-access-l5z67\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782796 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782808 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782820 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782832 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782842 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782850 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: I0320 15:44:23.782860 4730 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2499559b-b31f-4dab-89a0-964964dc596e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.869973 4730 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition
Mar 20 15:44:23 crc kubenswrapper[4730]: E0320 15:44:23.870055 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs podName:db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a nodeName:}" failed. No retries permitted until 2026-03-20 15:46:25.870035976 +0000 UTC m=+445.083407335 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs") pod "network-metrics-daemon-2prfn" (UID: "db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a") : failed to sync secret cache: timed out waiting for the condition
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.117186 4730 generic.go:334] "Generic (PLEG): container finished" podID="2499559b-b31f-4dab-89a0-964964dc596e" containerID="e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0" exitCode=0
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.117233 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" event={"ID":"2499559b-b31f-4dab-89a0-964964dc596e","Type":"ContainerDied","Data":"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"}
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.117259 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-st79s"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.117285 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-st79s" event={"ID":"2499559b-b31f-4dab-89a0-964964dc596e","Type":"ContainerDied","Data":"40fd46e178b8ce8c75de09a0315b49f9c04961cf890e7392083f9a7a77124dd2"}
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.117307 4730 scope.go:117] "RemoveContainer" containerID="e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.135787 4730 scope.go:117] "RemoveContainer" containerID="e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"
Mar 20 15:44:24 crc kubenswrapper[4730]: E0320 15:44:24.136264 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0\": container with ID starting with e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0 not found: ID does not exist" containerID="e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.136296 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0"} err="failed to get container status \"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0\": rpc error: code = NotFound desc = could not find container \"e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0\": container with ID starting with e1c21b159517c024d2850d533108f097e6b934c21c31f9514baab038d75a1db0 not found: ID does not exist"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.556568 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.556614 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:24 crc kubenswrapper[4730]: I0320 15:44:24.561365 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:26 crc kubenswrapper[4730]: I0320 15:44:26.880537 4730 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.135415 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.135454 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.139367 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.195338 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6aab3c27-7254-49fc-a52a-2f6ba7e3fced"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.291146 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.664388 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.664707 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.665691 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 20 15:44:27 crc kubenswrapper[4730]: I0320 15:44:27.873481 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 15:44:28 crc kubenswrapper[4730]: I0320 15:44:28.138872 4730 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384"
Mar 20 15:44:28 crc kubenswrapper[4730]: I0320 15:44:28.138910 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5c3fe2f-3c67-4dee-becb-3ecfe2758384"
Mar 20 15:44:28 crc kubenswrapper[4730]: I0320 15:44:28.142709 4730 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6aab3c27-7254-49fc-a52a-2f6ba7e3fced"
Mar 20 15:44:32 crc kubenswrapper[4730]: I0320 15:44:32.119422 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 20 15:44:32 crc kubenswrapper[4730]: I0320 15:44:32.119964 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 20 15:44:35 crc kubenswrapper[4730]: E0320 15:44:35.545820 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 15:44:35 crc kubenswrapper[4730]: E0320 15:44:35.552366 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 15:44:35 crc kubenswrapper[4730]: E0320 15:44:35.556779 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 15:44:36 crc kubenswrapper[4730]: E0320 15:44:36.544700 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-2prfn" podUID="db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a"
Mar 20 15:44:36 crc kubenswrapper[4730]: I0320 15:44:36.637678 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 20 15:44:37 crc kubenswrapper[4730]: I0320 15:44:37.360912 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.101480 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.117630 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.166861 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.215133 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.400597 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.829541 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 20 15:44:38 crc kubenswrapper[4730]: I0320 15:44:38.990990 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.397612 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.414988 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.455041 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.628732 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.705231 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 20 15:44:39 crc kubenswrapper[4730]: I0320 15:44:39.986078 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.057593 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.096368 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.152401 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.180605 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.225745 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.315893 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.397795 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.440590 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.561562 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.565798 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.694649 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.705602 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 20 15:44:40 crc kubenswrapper[4730]: I0320 15:44:40.740554 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.042636 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.105289 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.205399 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.213597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.439279 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.486730 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.524984 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.550475 4730 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.561834 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.588175 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.756848 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.776078 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.796389 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.835620 4730 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.867582 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 15:44:41 crc kubenswrapper[4730]: I0320 15:44:41.885174 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.120127 4730 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.120183 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.120299 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.120962 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e1f2bc557068d23d31fb94ba0b8755d440a6066e6a0d9e74613e4436d64e826e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.121102 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e1f2bc557068d23d31fb94ba0b8755d440a6066e6a0d9e74613e4436d64e826e" gracePeriod=30
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.135369 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.387169 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.424935 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.469226 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.561940 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.615932 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.823811 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.852747 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.853982 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.882629 4730 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.885202 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.88518357 podStartE2EDuration="36.88518357s" podCreationTimestamp="2026-03-20 15:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:44:26.919754059 +0000 UTC m=+326.133125448" watchObservedRunningTime="2026-03-20 15:44:42.88518357 +0000 UTC m=+342.098554939"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.886773 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.887756 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f56868448-2fbxh","openshift-authentication/oauth-openshift-558db77b4-st79s","openshift-route-controller-manager/route-controller-manager-b75b5f765-8wjw2","openshift-marketplace/redhat-marketplace-2z2hv","openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.887820 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.891605 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.906583 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.906566161 podStartE2EDuration="16.906566161s" podCreationTimestamp="2026-03-20 15:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:44:42.905589071 +0000 UTC m=+342.118960460" watchObservedRunningTime="2026-03-20 15:44:42.906566161 +0000 UTC m=+342.119937530"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.919813 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.921006 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 15:44:42 crc kubenswrapper[4730]: I0320 15:44:42.982551 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.192656 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.297345 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.384649 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.389223 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.399573 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.510425 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.540644 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2499559b-b31f-4dab-89a0-964964dc596e" path="/var/lib/kubelet/pods/2499559b-b31f-4dab-89a0-964964dc596e/volumes"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.541559 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" path="/var/lib/kubelet/pods/715cbff8-9674-4896-8deb-54a6e9a8899e/volumes"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.542380 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d747680-5dde-4793-863a-252a5f67233a" path="/var/lib/kubelet/pods/9d747680-5dde-4793-863a-252a5f67233a/volumes"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.543472 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" path="/var/lib/kubelet/pods/d4d20fab-86cc-44d8-a8b9-c60f6835c5e0/volumes"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.582029 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.732802 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.866780 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.870318 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.915373 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.919759 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 20 15:44:43 crc kubenswrapper[4730]: I0320 15:44:43.977866 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.011594 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.033195 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.087545 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.094533 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.132578 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.158095 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.238938 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.605583 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.702014 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.709659 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.749455 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.755065 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.871600 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.906137 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.930061 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.982355 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 20 15:44:44 crc kubenswrapper[4730]: I0320 15:44:44.988277 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.118701 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.152927 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.157144 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.176493 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.278322 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.409298 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.462347 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.570022 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.613608 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.681138 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.862464 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.909808 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.952695 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.954844 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 20 15:44:45 crc kubenswrapper[4730]: I0320 15:44:45.988716 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.034933 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.153428 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.168107 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.243841 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.532047 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.532138 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.532047 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.567364 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.642955 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.653383 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.671155 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.737796 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.932995 4730 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 15:44:46 crc kubenswrapper[4730]: I0320 15:44:46.940596 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.129848 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.164907 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.228180 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.231153 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.295426 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.376641 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.392157 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.408712 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.438713 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 
15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439175 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" containerName="installer" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439190 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" containerName="installer" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439198 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="registry-server" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439205 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="registry-server" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439214 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" containerName="oc" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439222 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" containerName="oc" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439230 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d747680-5dde-4793-863a-252a5f67233a" containerName="controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439237 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d747680-5dde-4793-863a-252a5f67233a" containerName="controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439263 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="extract-utilities" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439272 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="extract-utilities" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 
15:44:47.439281 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" containerName="route-controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439288 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" containerName="route-controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439296 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="extract-content" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439301 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="extract-content" Mar 20 15:44:47 crc kubenswrapper[4730]: E0320 15:44:47.439314 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2499559b-b31f-4dab-89a0-964964dc596e" containerName="oauth-openshift" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439320 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2499559b-b31f-4dab-89a0-964964dc596e" containerName="oauth-openshift" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439415 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d747680-5dde-4793-863a-252a5f67233a" containerName="controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439426 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="715cbff8-9674-4896-8deb-54a6e9a8899e" containerName="registry-server" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439434 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48519c7-0cdc-419b-bd72-2bab0e911af8" containerName="installer" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439441 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" containerName="oc" Mar 20 15:44:47 crc kubenswrapper[4730]: 
I0320 15:44:47.439453 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2499559b-b31f-4dab-89a0-964964dc596e" containerName="oauth-openshift" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439461 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d20fab-86cc-44d8-a8b9-c60f6835c5e0" containerName="route-controller-manager" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.439896 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.443727 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8"] Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.444569 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446177 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446203 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446831 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446213 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446349 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446396 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446412 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.446502 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.451475 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.451647 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.451820 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.452074 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.452116 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.452278 4730 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.452309 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.452712 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.453061 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.453185 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.453724 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.454382 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.454557 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.455585 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.455752 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.455927 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.459836 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.459875 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.459946 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.462418 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.464471 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.465309 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.465782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.470179 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8"] Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.476949 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.481047 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.486144 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.510397 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpzmq\" (UniqueName: \"kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.510476 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.510549 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.510616 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " 
pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.566948 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.611801 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.611870 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4c66130-d966-4244-abec-e2aefba87726-audit-dir\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.611907 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.611934 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: 
\"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.611960 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612007 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzmq\" (UniqueName: \"kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612064 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 
15:44:47.612086 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzm9\" (UniqueName: \"kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612190 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612297 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612455 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-audit-policies\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612514 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612589 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612647 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " 
pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612779 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-login\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612830 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612863 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg6x\" (UniqueName: \"kubernetes.io/projected/d4c66130-d966-4244-abec-e2aefba87726-kube-api-access-4vg6x\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.612910 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.613457 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.614341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.622056 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.631469 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzmq\" (UniqueName: 
\"kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.634320 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert\") pod \"route-controller-manager-754f74fd94-6mnkj\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.671042 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.713998 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-login\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714047 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg6x\" (UniqueName: \"kubernetes.io/projected/d4c66130-d966-4244-abec-e2aefba87726-kube-api-access-4vg6x\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714087 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714111 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4c66130-d966-4244-abec-e2aefba87726-audit-dir\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 
15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714259 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4c66130-d966-4244-abec-e2aefba87726-audit-dir\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714334 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714552 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714571 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzm9\" (UniqueName: \"kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9\") pod 
\"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714605 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714634 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714691 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-audit-policies\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714724 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714745 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714762 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.714783 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.715800 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-service-ca\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " 
pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.716103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-audit-policies\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.716285 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.716931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-login\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.717306 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.717401 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.717870 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.718013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.718041 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.718340 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 
crc kubenswrapper[4730]: I0320 15:44:47.719714 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.728690 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-session\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.728817 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-system-router-certs\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.728931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-error\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.729156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert\") pod \"controller-manager-86dfd77dcd-rv2mt\" 
(UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.730046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d4c66130-d966-4244-abec-e2aefba87726-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.731323 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzm9\" (UniqueName: \"kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9\") pod \"controller-manager-86dfd77dcd-rv2mt\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.735726 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg6x\" (UniqueName: \"kubernetes.io/projected/d4c66130-d966-4244-abec-e2aefba87726-kube-api-access-4vg6x\") pod \"oauth-openshift-6768bc9c9c-k6zb8\" (UID: \"d4c66130-d966-4244-abec-e2aefba87726\") " pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.765655 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.775350 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.785668 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.803105 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.813958 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.826635 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.976153 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.979779 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 20 15:44:47 crc kubenswrapper[4730]: I0320 15:44:47.982727 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.001447 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.215561 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.225280 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.238193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" 
event={"ID":"d180ebe8-2390-4a66-a6e0-1d02e256279a","Type":"ContainerStarted","Data":"1f20c0c6cb3f095a9ebf090bce89d3cdbdc4242e46cb281771f335b8d4272464"} Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.266848 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8"] Mar 20 15:44:48 crc kubenswrapper[4730]: W0320 15:44:48.266964 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4c66130_d966_4244_abec_e2aefba87726.slice/crio-7789633566c7ef452d701f6956579ab64153ee92f2c67df787e8cfb7fba13d91 WatchSource:0}: Error finding container 7789633566c7ef452d701f6956579ab64153ee92f2c67df787e8cfb7fba13d91: Status 404 returned error can't find the container with id 7789633566c7ef452d701f6956579ab64153ee92f2c67df787e8cfb7fba13d91 Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.292682 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.342394 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.343441 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.354512 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:44:48 crc kubenswrapper[4730]: W0320 15:44:48.359416 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef9a1400_2cee_4019_b907_440c025638b6.slice/crio-dc3dc13f49503d3d06bcb012aeab946187b3c373989eaa5abd5e06eb7c51d310 WatchSource:0}: Error finding container 
dc3dc13f49503d3d06bcb012aeab946187b3c373989eaa5abd5e06eb7c51d310: Status 404 returned error can't find the container with id dc3dc13f49503d3d06bcb012aeab946187b3c373989eaa5abd5e06eb7c51d310 Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.364862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.416172 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.463783 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.520574 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.541866 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.567155 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.594951 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.871762 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.875511 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.899320 4730 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.939859 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 20 15:44:48 crc kubenswrapper[4730]: I0320 15:44:48.948757 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.013993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.062714 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.094339 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.173018 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.195336 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.246390 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" event={"ID":"d180ebe8-2390-4a66-a6e0-1d02e256279a","Type":"ContainerStarted","Data":"79c37e5d53d635bd788a7db0e9cd6a218a8e4fb2962907f08447c07acfcacfe6"} Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.246433 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.249294 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" event={"ID":"ef9a1400-2cee-4019-b907-440c025638b6","Type":"ContainerStarted","Data":"3ebe650c86674a5b830721481ad113ef2a916f54a0020fdc9ad70e3a263d5aa6"} Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.249340 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" event={"ID":"ef9a1400-2cee-4019-b907-440c025638b6","Type":"ContainerStarted","Data":"dc3dc13f49503d3d06bcb012aeab946187b3c373989eaa5abd5e06eb7c51d310"} Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.249540 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.251467 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" event={"ID":"d4c66130-d966-4244-abec-e2aefba87726","Type":"ContainerStarted","Data":"c6a09d9e8075c096b4adbd19be4b5a53f4a961a045a1826d210756002335688e"} Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.251503 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" event={"ID":"d4c66130-d966-4244-abec-e2aefba87726","Type":"ContainerStarted","Data":"7789633566c7ef452d701f6956579ab64153ee92f2c67df787e8cfb7fba13d91"} Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.251883 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.253323 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.253942 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.257527 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.266997 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.271071 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.273798 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.285897 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" podStartSLOduration=45.28588033 podStartE2EDuration="45.28588033s" podCreationTimestamp="2026-03-20 15:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:44:49.265435457 +0000 UTC m=+348.478806826" watchObservedRunningTime="2026-03-20 15:44:49.28588033 +0000 UTC m=+348.499251709" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.287591 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6768bc9c9c-k6zb8" podStartSLOduration=51.287580811 podStartE2EDuration="51.287580811s" podCreationTimestamp="2026-03-20 15:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:44:49.283682103 +0000 UTC m=+348.497053492" watchObservedRunningTime="2026-03-20 
15:44:49.287580811 +0000 UTC m=+348.500952180" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.303624 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.327573 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" podStartSLOduration=45.327553739 podStartE2EDuration="45.327553739s" podCreationTimestamp="2026-03-20 15:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:44:49.323955129 +0000 UTC m=+348.537326498" watchObservedRunningTime="2026-03-20 15:44:49.327553739 +0000 UTC m=+348.540925108" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.444331 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.449818 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.481682 4730 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.512284 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.545027 4730 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.549221 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 
15:44:49.568518 4730 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.568753 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f" gracePeriod=5 Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.576921 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.602169 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.695217 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.713889 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.778229 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.880099 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 20 15:44:49 crc kubenswrapper[4730]: I0320 15:44:49.930895 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.155481 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 15:44:50 crc kubenswrapper[4730]: 
I0320 15:44:50.225091 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.346829 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.426172 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.517154 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.532624 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.540371 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.633879 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.683169 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.706899 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.753911 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 15:44:50 crc kubenswrapper[4730]: I0320 15:44:50.817891 4730 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.005981 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.150387 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.159953 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.219079 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.486771 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.697226 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.735019 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.841454 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.947589 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.992048 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 15:44:51 crc kubenswrapper[4730]: I0320 15:44:51.992980 4730 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.056100 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.066202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.085935 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.087375 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.227355 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.502476 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.600682 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.659318 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.666340 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 20 15:44:52 crc kubenswrapper[4730]: I0320 15:44:52.961164 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.007965 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.100363 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.138425 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.468657 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.627019 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.881044 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 15:44:53 crc kubenswrapper[4730]: I0320 15:44:53.950040 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.039569 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.270387 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.295597 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 15:44:54 crc kubenswrapper[4730]: 
I0320 15:44:54.569581 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.672006 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.672444 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.738963 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.746967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747042 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747107 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747150 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747196 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747144 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747226 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747174 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747461 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747571 4730 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747584 4730 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747593 4730 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.747603 4730 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.755407 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.848069 4730 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 15:44:54 crc kubenswrapper[4730]: I0320 15:44:54.872179 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.285948 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.286019 4730 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f" exitCode=137 Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.286070 4730 scope.go:117] "RemoveContainer" containerID="2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.286208 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.304233 4730 scope.go:117] "RemoveContainer" containerID="2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f" Mar 20 15:44:55 crc kubenswrapper[4730]: E0320 15:44:55.304704 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f\": container with ID starting with 2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f not found: ID does not exist" containerID="2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.304812 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f"} err="failed to get container status \"2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f\": rpc error: code = NotFound desc = could not find container \"2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f\": container with ID starting with 2bd02e0e36575d63682737a4f6e6a51da85e3c01b703e67cc6238582df76514f not found: ID does not exist" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.540916 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.541683 4730 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.552853 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 
15:44:55.553109 4730 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e4577fc5-223a-423f-9bc6-c292e6be96b9" Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.557340 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 20 15:44:55 crc kubenswrapper[4730]: I0320 15:44:55.557388 4730 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e4577fc5-223a-423f-9bc6-c292e6be96b9" Mar 20 15:44:56 crc kubenswrapper[4730]: I0320 15:44:56.017499 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.513194 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk"] Mar 20 15:45:02 crc kubenswrapper[4730]: E0320 15:45:02.513751 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.513767 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.513859 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.514195 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.517407 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.517519 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.525113 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk"] Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.648605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpvp\" (UniqueName: \"kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.648755 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.648779 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.749909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.749963 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.750021 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpvp\" (UniqueName: \"kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.752021 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.755681 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.770077 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpvp\" (UniqueName: \"kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp\") pod \"collect-profiles-29567025-sp8pk\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:02 crc kubenswrapper[4730]: I0320 15:45:02.849491 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:03 crc kubenswrapper[4730]: I0320 15:45:03.247332 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk"] Mar 20 15:45:03 crc kubenswrapper[4730]: W0320 15:45:03.252851 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb3d4357_8143_45e9_ab45_e55f54735cbc.slice/crio-40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c WatchSource:0}: Error finding container 40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c: Status 404 returned error can't find the container with id 40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c Mar 20 15:45:03 crc kubenswrapper[4730]: I0320 15:45:03.332552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" event={"ID":"db3d4357-8143-45e9-ab45-e55f54735cbc","Type":"ContainerStarted","Data":"40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c"} Mar 20 15:45:04 crc 
kubenswrapper[4730]: I0320 15:45:04.116548 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.117039 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" podUID="d180ebe8-2390-4a66-a6e0-1d02e256279a" containerName="controller-manager" containerID="cri-o://79c37e5d53d635bd788a7db0e9cd6a218a8e4fb2962907f08447c07acfcacfe6" gracePeriod=30 Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.213873 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.214103 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" podUID="ef9a1400-2cee-4019-b907-440c025638b6" containerName="route-controller-manager" containerID="cri-o://3ebe650c86674a5b830721481ad113ef2a916f54a0020fdc9ad70e3a263d5aa6" gracePeriod=30 Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.344150 4730 generic.go:334] "Generic (PLEG): container finished" podID="db3d4357-8143-45e9-ab45-e55f54735cbc" containerID="7961ca89ce2a460b127b00611370ac925492414c79b33a7aef5d34aaea8acb7f" exitCode=0 Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.344655 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" event={"ID":"db3d4357-8143-45e9-ab45-e55f54735cbc","Type":"ContainerDied","Data":"7961ca89ce2a460b127b00611370ac925492414c79b33a7aef5d34aaea8acb7f"} Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.346453 4730 generic.go:334] "Generic (PLEG): container finished" podID="d180ebe8-2390-4a66-a6e0-1d02e256279a" 
containerID="79c37e5d53d635bd788a7db0e9cd6a218a8e4fb2962907f08447c07acfcacfe6" exitCode=0 Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.346486 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" event={"ID":"d180ebe8-2390-4a66-a6e0-1d02e256279a","Type":"ContainerDied","Data":"79c37e5d53d635bd788a7db0e9cd6a218a8e4fb2962907f08447c07acfcacfe6"} Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.348210 4730 generic.go:334] "Generic (PLEG): container finished" podID="ef9a1400-2cee-4019-b907-440c025638b6" containerID="3ebe650c86674a5b830721481ad113ef2a916f54a0020fdc9ad70e3a263d5aa6" exitCode=0 Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.348272 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" event={"ID":"ef9a1400-2cee-4019-b907-440c025638b6","Type":"ContainerDied","Data":"3ebe650c86674a5b830721481ad113ef2a916f54a0020fdc9ad70e3a263d5aa6"} Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.657373 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.665408 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775542 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert\") pod \"ef9a1400-2cee-4019-b907-440c025638b6\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775581 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config\") pod \"d180ebe8-2390-4a66-a6e0-1d02e256279a\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775616 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpzmq\" (UniqueName: \"kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq\") pod \"ef9a1400-2cee-4019-b907-440c025638b6\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775676 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert\") pod \"d180ebe8-2390-4a66-a6e0-1d02e256279a\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775717 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca\") pod \"ef9a1400-2cee-4019-b907-440c025638b6\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775743 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca\") pod \"d180ebe8-2390-4a66-a6e0-1d02e256279a\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.775762 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzm9\" (UniqueName: \"kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9\") pod \"d180ebe8-2390-4a66-a6e0-1d02e256279a\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776323 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca" (OuterVolumeSpecName: "client-ca") pod "d180ebe8-2390-4a66-a6e0-1d02e256279a" (UID: "d180ebe8-2390-4a66-a6e0-1d02e256279a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776419 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config\") pod \"ef9a1400-2cee-4019-b907-440c025638b6\" (UID: \"ef9a1400-2cee-4019-b907-440c025638b6\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles\") pod \"d180ebe8-2390-4a66-a6e0-1d02e256279a\" (UID: \"d180ebe8-2390-4a66-a6e0-1d02e256279a\") " Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776484 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config" (OuterVolumeSpecName: "config") pod "d180ebe8-2390-4a66-a6e0-1d02e256279a" (UID: 
"d180ebe8-2390-4a66-a6e0-1d02e256279a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef9a1400-2cee-4019-b907-440c025638b6" (UID: "ef9a1400-2cee-4019-b907-440c025638b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.776854 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config" (OuterVolumeSpecName: "config") pod "ef9a1400-2cee-4019-b907-440c025638b6" (UID: "ef9a1400-2cee-4019-b907-440c025638b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.777038 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d180ebe8-2390-4a66-a6e0-1d02e256279a" (UID: "d180ebe8-2390-4a66-a6e0-1d02e256279a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.777089 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.777101 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.777109 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.777118 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9a1400-2cee-4019-b907-440c025638b6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.781821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9" (OuterVolumeSpecName: "kube-api-access-sjzm9") pod "d180ebe8-2390-4a66-a6e0-1d02e256279a" (UID: "d180ebe8-2390-4a66-a6e0-1d02e256279a"). InnerVolumeSpecName "kube-api-access-sjzm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.781863 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq" (OuterVolumeSpecName: "kube-api-access-hpzmq") pod "ef9a1400-2cee-4019-b907-440c025638b6" (UID: "ef9a1400-2cee-4019-b907-440c025638b6"). InnerVolumeSpecName "kube-api-access-hpzmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.782395 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef9a1400-2cee-4019-b907-440c025638b6" (UID: "ef9a1400-2cee-4019-b907-440c025638b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.785332 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d180ebe8-2390-4a66-a6e0-1d02e256279a" (UID: "d180ebe8-2390-4a66-a6e0-1d02e256279a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.878578 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9a1400-2cee-4019-b907-440c025638b6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.878624 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpzmq\" (UniqueName: \"kubernetes.io/projected/ef9a1400-2cee-4019-b907-440c025638b6-kube-api-access-hpzmq\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.878637 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d180ebe8-2390-4a66-a6e0-1d02e256279a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.878647 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzm9\" (UniqueName: \"kubernetes.io/projected/d180ebe8-2390-4a66-a6e0-1d02e256279a-kube-api-access-sjzm9\") on node \"crc\" DevicePath 
\"\"" Mar 20 15:45:04 crc kubenswrapper[4730]: I0320 15:45:04.878658 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d180ebe8-2390-4a66-a6e0-1d02e256279a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.241993 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:05 crc kubenswrapper[4730]: E0320 15:45:05.242265 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9a1400-2cee-4019-b907-440c025638b6" containerName="route-controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.242281 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9a1400-2cee-4019-b907-440c025638b6" containerName="route-controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: E0320 15:45:05.242298 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d180ebe8-2390-4a66-a6e0-1d02e256279a" containerName="controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.242305 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d180ebe8-2390-4a66-a6e0-1d02e256279a" containerName="controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.242429 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d180ebe8-2390-4a66-a6e0-1d02e256279a" containerName="controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.242443 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9a1400-2cee-4019-b907-440c025638b6" containerName="route-controller-manager" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.242838 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.258121 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.358553 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" event={"ID":"ef9a1400-2cee-4019-b907-440c025638b6","Type":"ContainerDied","Data":"dc3dc13f49503d3d06bcb012aeab946187b3c373989eaa5abd5e06eb7c51d310"} Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.358571 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.358619 4730 scope.go:117] "RemoveContainer" containerID="3ebe650c86674a5b830721481ad113ef2a916f54a0020fdc9ad70e3a263d5aa6" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.360355 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.362316 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt" event={"ID":"d180ebe8-2390-4a66-a6e0-1d02e256279a","Type":"ContainerDied","Data":"1f20c0c6cb3f095a9ebf090bce89d3cdbdc4242e46cb281771f335b8d4272464"} Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.383064 4730 scope.go:117] "RemoveContainer" containerID="79c37e5d53d635bd788a7db0e9cd6a218a8e4fb2962907f08447c07acfcacfe6" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.386179 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.386276 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.386310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.386440 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.386478 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj57d\" (UniqueName: \"kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.397586 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.404165 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86dfd77dcd-rv2mt"] Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.411230 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.419400 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754f74fd94-6mnkj"] Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.489435 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " 
pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.489481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.489510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.489570 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.489602 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj57d\" (UniqueName: \"kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.490848 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca\") pod 
\"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.492072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.492451 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.494784 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.511167 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj57d\" (UniqueName: \"kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d\") pod \"controller-manager-5855d88fdc-qmhg5\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.541441 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d180ebe8-2390-4a66-a6e0-1d02e256279a" 
path="/var/lib/kubelet/pods/d180ebe8-2390-4a66-a6e0-1d02e256279a/volumes" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.541996 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9a1400-2cee-4019-b907-440c025638b6" path="/var/lib/kubelet/pods/ef9a1400-2cee-4019-b907-440c025638b6/volumes" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.555922 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.557145 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.693301 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume\") pod \"db3d4357-8143-45e9-ab45-e55f54735cbc\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.693699 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpvp\" (UniqueName: \"kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp\") pod \"db3d4357-8143-45e9-ab45-e55f54735cbc\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.693735 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume\") pod \"db3d4357-8143-45e9-ab45-e55f54735cbc\" (UID: \"db3d4357-8143-45e9-ab45-e55f54735cbc\") " Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.694677 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume" (OuterVolumeSpecName: "config-volume") pod "db3d4357-8143-45e9-ab45-e55f54735cbc" (UID: "db3d4357-8143-45e9-ab45-e55f54735cbc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.698845 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp" (OuterVolumeSpecName: "kube-api-access-pxpvp") pod "db3d4357-8143-45e9-ab45-e55f54735cbc" (UID: "db3d4357-8143-45e9-ab45-e55f54735cbc"). InnerVolumeSpecName "kube-api-access-pxpvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.702799 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db3d4357-8143-45e9-ab45-e55f54735cbc" (UID: "db3d4357-8143-45e9-ab45-e55f54735cbc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.794997 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db3d4357-8143-45e9-ab45-e55f54735cbc-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.795036 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxpvp\" (UniqueName: \"kubernetes.io/projected/db3d4357-8143-45e9-ab45-e55f54735cbc-kube-api-access-pxpvp\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.795050 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db3d4357-8143-45e9-ab45-e55f54735cbc-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:05 crc kubenswrapper[4730]: I0320 15:45:05.986788 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.241988 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:06 crc kubenswrapper[4730]: E0320 15:45:06.242439 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3d4357-8143-45e9-ab45-e55f54735cbc" containerName="collect-profiles" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.242454 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3d4357-8143-45e9-ab45-e55f54735cbc" containerName="collect-profiles" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.242575 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3d4357-8143-45e9-ab45-e55f54735cbc" containerName="collect-profiles" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.242992 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.245973 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.245986 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.246036 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.246194 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.246334 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.246729 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.253362 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.367981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" event={"ID":"c4a9957b-61b8-4536-ae91-fe1f1c12575f","Type":"ContainerStarted","Data":"272f9abedbf05198cbcea3f3c4c9c1a9c6f254ef663ed65fee9f8391c9210b16"} Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.368020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" 
event={"ID":"c4a9957b-61b8-4536-ae91-fe1f1c12575f","Type":"ContainerStarted","Data":"85975510d8507912acb9c3c9e341eb489a6bfcf769bf4be7878158b90105c935"} Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.368270 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.370261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" event={"ID":"db3d4357-8143-45e9-ab45-e55f54735cbc","Type":"ContainerDied","Data":"40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c"} Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.370303 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c162368944bc083f86ebaa26c5c852910422e4d4be629097632169724e2f9c" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.370319 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.397839 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.403268 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" podStartSLOduration=2.403231178 podStartE2EDuration="2.403231178s" podCreationTimestamp="2026-03-20 15:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:45:06.399486264 +0000 UTC m=+365.612857643" watchObservedRunningTime="2026-03-20 15:45:06.403231178 +0000 UTC m=+365.616602547" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.405968 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.406081 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.406214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.406399 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzs7x\" (UniqueName: \"kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.508002 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzs7x\" (UniqueName: \"kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.508155 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.508178 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " 
pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.508215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.509342 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.509933 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.511821 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.526295 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzs7x\" (UniqueName: 
\"kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x\") pod \"route-controller-manager-79bc85d457-rb8tc\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.562825 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:06 crc kubenswrapper[4730]: I0320 15:45:06.954043 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:06 crc kubenswrapper[4730]: W0320 15:45:06.965736 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b8a480c_69ef_49fc_83a5_4cd052de69f9.slice/crio-86f0fd37bd61a9248f93bc6cef29889e8fda36907bdf0b815ae3f15147d600a1 WatchSource:0}: Error finding container 86f0fd37bd61a9248f93bc6cef29889e8fda36907bdf0b815ae3f15147d600a1: Status 404 returned error can't find the container with id 86f0fd37bd61a9248f93bc6cef29889e8fda36907bdf0b815ae3f15147d600a1 Mar 20 15:45:07 crc kubenswrapper[4730]: I0320 15:45:07.376521 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" event={"ID":"5b8a480c-69ef-49fc-83a5-4cd052de69f9","Type":"ContainerStarted","Data":"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027"} Mar 20 15:45:07 crc kubenswrapper[4730]: I0320 15:45:07.376575 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" event={"ID":"5b8a480c-69ef-49fc-83a5-4cd052de69f9","Type":"ContainerStarted","Data":"86f0fd37bd61a9248f93bc6cef29889e8fda36907bdf0b815ae3f15147d600a1"} Mar 20 15:45:07 crc kubenswrapper[4730]: I0320 
15:45:07.395872 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" podStartSLOduration=3.395856479 podStartE2EDuration="3.395856479s" podCreationTimestamp="2026-03-20 15:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:45:07.391642591 +0000 UTC m=+366.605013960" watchObservedRunningTime="2026-03-20 15:45:07.395856479 +0000 UTC m=+366.609227848" Mar 20 15:45:08 crc kubenswrapper[4730]: I0320 15:45:08.381998 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:08 crc kubenswrapper[4730]: I0320 15:45:08.387290 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.406164 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.408147 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.408847 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.408910 4730 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e1f2bc557068d23d31fb94ba0b8755d440a6066e6a0d9e74613e4436d64e826e" 
exitCode=137 Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.408948 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e1f2bc557068d23d31fb94ba0b8755d440a6066e6a0d9e74613e4436d64e826e"} Mar 20 15:45:12 crc kubenswrapper[4730]: I0320 15:45:12.408985 4730 scope.go:117] "RemoveContainer" containerID="1aee2dcf43ecf6df4a1615aa6e468921053ccb529d3c6dbc2c2ad641e264e606" Mar 20 15:45:13 crc kubenswrapper[4730]: I0320 15:45:13.416652 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 20 15:45:13 crc kubenswrapper[4730]: I0320 15:45:13.418172 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 15:45:13 crc kubenswrapper[4730]: I0320 15:45:13.418235 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c7a4751e0ccbddb326de2ce02b8632520e3bc55d3267079a122c3d02126536b6"} Mar 20 15:45:16 crc kubenswrapper[4730]: I0320 15:45:16.433777 4730 generic.go:334] "Generic (PLEG): container finished" podID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerID="53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2" exitCode=0 Mar 20 15:45:16 crc kubenswrapper[4730]: I0320 15:45:16.433929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerDied","Data":"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2"} Mar 20 15:45:16 crc kubenswrapper[4730]: I0320 
15:45:16.434713 4730 scope.go:117] "RemoveContainer" containerID="53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2" Mar 20 15:45:17 crc kubenswrapper[4730]: I0320 15:45:17.291641 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:45:17 crc kubenswrapper[4730]: I0320 15:45:17.439693 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerStarted","Data":"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06"} Mar 20 15:45:17 crc kubenswrapper[4730]: I0320 15:45:17.440537 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:45:17 crc kubenswrapper[4730]: I0320 15:45:17.442562 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:45:22 crc kubenswrapper[4730]: I0320 15:45:22.118713 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:45:22 crc kubenswrapper[4730]: I0320 15:45:22.129950 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:45:22 crc kubenswrapper[4730]: I0320 15:45:22.468339 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 15:45:29 crc kubenswrapper[4730]: I0320 15:45:29.798911 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:29 crc kubenswrapper[4730]: I0320 15:45:29.799693 4730 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" podUID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" containerName="controller-manager" containerID="cri-o://272f9abedbf05198cbcea3f3c4c9c1a9c6f254ef663ed65fee9f8391c9210b16" gracePeriod=30 Mar 20 15:45:29 crc kubenswrapper[4730]: I0320 15:45:29.803963 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:29 crc kubenswrapper[4730]: I0320 15:45:29.811298 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" podUID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" containerName="route-controller-manager" containerID="cri-o://37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027" gracePeriod=30 Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.369078 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.517397 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" containerID="272f9abedbf05198cbcea3f3c4c9c1a9c6f254ef663ed65fee9f8391c9210b16" exitCode=0 Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.517436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" event={"ID":"c4a9957b-61b8-4536-ae91-fe1f1c12575f","Type":"ContainerDied","Data":"272f9abedbf05198cbcea3f3c4c9c1a9c6f254ef663ed65fee9f8391c9210b16"} Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.518444 4730 generic.go:334] "Generic (PLEG): container finished" podID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" containerID="37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027" exitCode=0 Mar 20 15:45:30 crc kubenswrapper[4730]: 
I0320 15:45:30.518472 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" event={"ID":"5b8a480c-69ef-49fc-83a5-4cd052de69f9","Type":"ContainerDied","Data":"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027"} Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.518491 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" event={"ID":"5b8a480c-69ef-49fc-83a5-4cd052de69f9","Type":"ContainerDied","Data":"86f0fd37bd61a9248f93bc6cef29889e8fda36907bdf0b815ae3f15147d600a1"} Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.518493 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.518576 4730 scope.go:117] "RemoveContainer" containerID="37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.537024 4730 scope.go:117] "RemoveContainer" containerID="37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027" Mar 20 15:45:30 crc kubenswrapper[4730]: E0320 15:45:30.537482 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027\": container with ID starting with 37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027 not found: ID does not exist" containerID="37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.537624 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027"} err="failed to get container status 
\"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027\": rpc error: code = NotFound desc = could not find container \"37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027\": container with ID starting with 37e041be660c1bba7e1df6b27c21e4558d7b932a4f7c1f474df6579e15113027 not found: ID does not exist" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.557590 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert\") pod \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.557631 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config\") pod \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.557692 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzs7x\" (UniqueName: \"kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x\") pod \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.557718 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca\") pod \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\" (UID: \"5b8a480c-69ef-49fc-83a5-4cd052de69f9\") " Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.558476 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"5b8a480c-69ef-49fc-83a5-4cd052de69f9" (UID: "5b8a480c-69ef-49fc-83a5-4cd052de69f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.558520 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config" (OuterVolumeSpecName: "config") pod "5b8a480c-69ef-49fc-83a5-4cd052de69f9" (UID: "5b8a480c-69ef-49fc-83a5-4cd052de69f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.563689 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b8a480c-69ef-49fc-83a5-4cd052de69f9" (UID: "5b8a480c-69ef-49fc-83a5-4cd052de69f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.564151 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x" (OuterVolumeSpecName: "kube-api-access-wzs7x") pod "5b8a480c-69ef-49fc-83a5-4cd052de69f9" (UID: "5b8a480c-69ef-49fc-83a5-4cd052de69f9"). InnerVolumeSpecName "kube-api-access-wzs7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.659854 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzs7x\" (UniqueName: \"kubernetes.io/projected/5b8a480c-69ef-49fc-83a5-4cd052de69f9-kube-api-access-wzs7x\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.659886 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.659896 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b8a480c-69ef-49fc-83a5-4cd052de69f9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.659905 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8a480c-69ef-49fc-83a5-4cd052de69f9-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.841314 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.846358 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-rb8tc"] Mar 20 15:45:30 crc kubenswrapper[4730]: I0320 15:45:30.950276 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.065237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config\") pod \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.065511 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca\") pod \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.065535 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj57d\" (UniqueName: \"kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d\") pod \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.065556 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles\") pod \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.065646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert\") pod \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\" (UID: \"c4a9957b-61b8-4536-ae91-fe1f1c12575f\") " Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.066467 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config" (OuterVolumeSpecName: "config") pod "c4a9957b-61b8-4536-ae91-fe1f1c12575f" (UID: "c4a9957b-61b8-4536-ae91-fe1f1c12575f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.066504 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c4a9957b-61b8-4536-ae91-fe1f1c12575f" (UID: "c4a9957b-61b8-4536-ae91-fe1f1c12575f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.066679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4a9957b-61b8-4536-ae91-fe1f1c12575f" (UID: "c4a9957b-61b8-4536-ae91-fe1f1c12575f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.068709 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4a9957b-61b8-4536-ae91-fe1f1c12575f" (UID: "c4a9957b-61b8-4536-ae91-fe1f1c12575f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.069394 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d" (OuterVolumeSpecName: "kube-api-access-nj57d") pod "c4a9957b-61b8-4536-ae91-fe1f1c12575f" (UID: "c4a9957b-61b8-4536-ae91-fe1f1c12575f"). InnerVolumeSpecName "kube-api-access-nj57d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.166806 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.166864 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.166878 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj57d\" (UniqueName: \"kubernetes.io/projected/c4a9957b-61b8-4536-ae91-fe1f1c12575f-kube-api-access-nj57d\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.166893 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4a9957b-61b8-4536-ae91-fe1f1c12575f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.166906 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a9957b-61b8-4536-ae91-fe1f1c12575f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.266436 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:45:31 crc kubenswrapper[4730]: E0320 15:45:31.266735 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" containerName="route-controller-manager" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.266755 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" containerName="route-controller-manager" Mar 
20 15:45:31 crc kubenswrapper[4730]: E0320 15:45:31.266767 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" containerName="controller-manager" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.266776 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" containerName="controller-manager" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.266912 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" containerName="route-controller-manager" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.266925 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" containerName="controller-manager" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.267483 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.268029 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.268085 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.268114 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbk4s\" (UniqueName: \"kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.268292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.272100 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.272945 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.273622 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.273895 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.274216 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.274591 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.276020 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.276200 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.279654 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.283767 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " 
pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370336 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbk4s\" (UniqueName: \"kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370820 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370875 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ml4\" (UniqueName: \"kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370896 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.370987 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.371727 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 
15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.371884 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.376182 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.389064 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbk4s\" (UniqueName: \"kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s\") pod \"route-controller-manager-887974f98-g5dnn\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.472453 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2ml4\" (UniqueName: \"kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.472544 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles\") pod 
\"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.472679 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.472754 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.472810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.473715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.474399 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.475563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.476627 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.488989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2ml4\" (UniqueName: \"kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4\") pod \"controller-manager-6677d585cb-h45gr\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.525833 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" event={"ID":"c4a9957b-61b8-4536-ae91-fe1f1c12575f","Type":"ContainerDied","Data":"85975510d8507912acb9c3c9e341eb489a6bfcf769bf4be7878158b90105c935"} Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.525893 4730 scope.go:117] "RemoveContainer" 
containerID="272f9abedbf05198cbcea3f3c4c9c1a9c6f254ef663ed65fee9f8391c9210b16" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.525925 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-qmhg5" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.546196 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8a480c-69ef-49fc-83a5-4cd052de69f9" path="/var/lib/kubelet/pods/5b8a480c-69ef-49fc-83a5-4cd052de69f9/volumes" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.568917 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.572032 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-qmhg5"] Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.592209 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.617215 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:31 crc kubenswrapper[4730]: I0320 15:45:31.995958 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:45:32 crc kubenswrapper[4730]: W0320 15:45:32.001764 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f367c6_14a0_4683_aff1_d6fbe6cc3a47.slice/crio-4655d39dd1465fdb2c59ebf55c060fefff1bcbcd7ef3b0a27cf18646bd27a07f WatchSource:0}: Error finding container 4655d39dd1465fdb2c59ebf55c060fefff1bcbcd7ef3b0a27cf18646bd27a07f: Status 404 returned error can't find the container with id 4655d39dd1465fdb2c59ebf55c060fefff1bcbcd7ef3b0a27cf18646bd27a07f Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.077359 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:45:32 crc kubenswrapper[4730]: W0320 15:45:32.090262 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58959cc0_9ad2_48f9_83da_c33525b8919d.slice/crio-192de1853e19e7668e1d1800d2ca4b5c1e7f274d95cbee877dd11e903eda6ae2 WatchSource:0}: Error finding container 192de1853e19e7668e1d1800d2ca4b5c1e7f274d95cbee877dd11e903eda6ae2: Status 404 returned error can't find the container with id 192de1853e19e7668e1d1800d2ca4b5c1e7f274d95cbee877dd11e903eda6ae2 Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.534300 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" event={"ID":"58959cc0-9ad2-48f9-83da-c33525b8919d","Type":"ContainerStarted","Data":"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe"} Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.534350 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" event={"ID":"58959cc0-9ad2-48f9-83da-c33525b8919d","Type":"ContainerStarted","Data":"192de1853e19e7668e1d1800d2ca4b5c1e7f274d95cbee877dd11e903eda6ae2"} Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.534469 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.535592 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" event={"ID":"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47","Type":"ContainerStarted","Data":"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7"} Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.535615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" event={"ID":"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47","Type":"ContainerStarted","Data":"4655d39dd1465fdb2c59ebf55c060fefff1bcbcd7ef3b0a27cf18646bd27a07f"} Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.535802 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.540869 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.575740 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" podStartSLOduration=3.57571926 podStartE2EDuration="3.57571926s" podCreationTimestamp="2026-03-20 15:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:45:32.555798633 +0000 UTC m=+391.769170002" watchObservedRunningTime="2026-03-20 15:45:32.57571926 +0000 UTC m=+391.789090629" Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.576467 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" podStartSLOduration=3.576460313 podStartE2EDuration="3.576460313s" podCreationTimestamp="2026-03-20 15:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:45:32.571946695 +0000 UTC m=+391.785318074" watchObservedRunningTime="2026-03-20 15:45:32.576460313 +0000 UTC m=+391.789831682" Mar 20 15:45:32 crc kubenswrapper[4730]: I0320 15:45:32.813353 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:45:33 crc kubenswrapper[4730]: I0320 15:45:33.550880 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a9957b-61b8-4536-ae91-fe1f1c12575f" path="/var/lib/kubelet/pods/c4a9957b-61b8-4536-ae91-fe1f1c12575f/volumes" Mar 20 15:45:48 crc kubenswrapper[4730]: I0320 15:45:48.592705 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:45:48 crc kubenswrapper[4730]: I0320 15:45:48.593581 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qmxvf" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="registry-server" containerID="cri-o://9a52c050b4986758df8c76456386e12221d78e4ea6fa2b1c10d15807ad001b19" gracePeriod=2 Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.632851 4730 generic.go:334] "Generic (PLEG): container finished" podID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" 
containerID="9a52c050b4986758df8c76456386e12221d78e4ea6fa2b1c10d15807ad001b19" exitCode=0 Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.633015 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerDied","Data":"9a52c050b4986758df8c76456386e12221d78e4ea6fa2b1c10d15807ad001b19"} Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.777171 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.922964 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c86g2\" (UniqueName: \"kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2\") pod \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.923114 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities\") pod \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.923172 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content\") pod \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\" (UID: \"ab6c90a0-1bc1-476d-8526-d1fe438163e3\") " Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.924185 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities" (OuterVolumeSpecName: "utilities") pod "ab6c90a0-1bc1-476d-8526-d1fe438163e3" (UID: 
"ab6c90a0-1bc1-476d-8526-d1fe438163e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:45:49 crc kubenswrapper[4730]: I0320 15:45:49.932975 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2" (OuterVolumeSpecName: "kube-api-access-c86g2") pod "ab6c90a0-1bc1-476d-8526-d1fe438163e3" (UID: "ab6c90a0-1bc1-476d-8526-d1fe438163e3"). InnerVolumeSpecName "kube-api-access-c86g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.024314 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.024375 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c86g2\" (UniqueName: \"kubernetes.io/projected/ab6c90a0-1bc1-476d-8526-d1fe438163e3-kube-api-access-c86g2\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.043558 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab6c90a0-1bc1-476d-8526-d1fe438163e3" (UID: "ab6c90a0-1bc1-476d-8526-d1fe438163e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.125612 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab6c90a0-1bc1-476d-8526-d1fe438163e3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.641401 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmxvf" event={"ID":"ab6c90a0-1bc1-476d-8526-d1fe438163e3","Type":"ContainerDied","Data":"3663f63ee1b2ff284c18b50ae774d9b77f54d7f929ad2803b36f2f39057f8d54"} Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.641787 4730 scope.go:117] "RemoveContainer" containerID="9a52c050b4986758df8c76456386e12221d78e4ea6fa2b1c10d15807ad001b19" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.641508 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmxvf" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.658793 4730 scope.go:117] "RemoveContainer" containerID="1d25cea84c8b33aa09a01d2a67ef03e54e2640ab453060480220fbbf97ebde61" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.675697 4730 scope.go:117] "RemoveContainer" containerID="6a84b3881231514a341a3dd596d04f1b46d9c8c10a2296046ee0ddb6c55675a3" Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.707864 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:45:50 crc kubenswrapper[4730]: I0320 15:45:50.710981 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qmxvf"] Mar 20 15:45:51 crc kubenswrapper[4730]: I0320 15:45:51.539620 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" path="/var/lib/kubelet/pods/ab6c90a0-1bc1-476d-8526-d1fe438163e3/volumes" Mar 20 15:46:00 crc 
kubenswrapper[4730]: I0320 15:46:00.170895 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567026-x7pgl"] Mar 20 15:46:00 crc kubenswrapper[4730]: E0320 15:46:00.171682 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="extract-content" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.171699 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="extract-content" Mar 20 15:46:00 crc kubenswrapper[4730]: E0320 15:46:00.171709 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="registry-server" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.171716 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="registry-server" Mar 20 15:46:00 crc kubenswrapper[4730]: E0320 15:46:00.171729 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="extract-utilities" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.171737 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="extract-utilities" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.171850 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6c90a0-1bc1-476d-8526-d1fe438163e3" containerName="registry-server" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.172290 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.174405 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.174742 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.179447 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567026-x7pgl"] Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.191683 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.284067 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqprb\" (UniqueName: \"kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb\") pod \"auto-csr-approver-29567026-x7pgl\" (UID: \"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1\") " pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.385413 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqprb\" (UniqueName: \"kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb\") pod \"auto-csr-approver-29567026-x7pgl\" (UID: \"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1\") " pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.403979 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqprb\" (UniqueName: \"kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb\") pod \"auto-csr-approver-29567026-x7pgl\" (UID: \"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1\") " 
pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.495689 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:00 crc kubenswrapper[4730]: I0320 15:46:00.900358 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567026-x7pgl"] Mar 20 15:46:00 crc kubenswrapper[4730]: W0320 15:46:00.904424 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2acd72a0_988c_4c58_a7b4_c139ee0f6ef1.slice/crio-09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad WatchSource:0}: Error finding container 09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad: Status 404 returned error can't find the container with id 09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad Mar 20 15:46:01 crc kubenswrapper[4730]: I0320 15:46:01.696561 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" event={"ID":"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1","Type":"ContainerStarted","Data":"09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad"} Mar 20 15:46:02 crc kubenswrapper[4730]: I0320 15:46:02.702719 4730 generic.go:334] "Generic (PLEG): container finished" podID="2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" containerID="b5ebe6b01434979e266e3872ff5405b028a732d1dd5830a3d6f3ad270518946a" exitCode=0 Mar 20 15:46:02 crc kubenswrapper[4730]: I0320 15:46:02.702971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" event={"ID":"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1","Type":"ContainerDied","Data":"b5ebe6b01434979e266e3872ff5405b028a732d1dd5830a3d6f3ad270518946a"} Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.008834 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.119309 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.122304 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" podUID="58959cc0-9ad2-48f9-83da-c33525b8919d" containerName="controller-manager" containerID="cri-o://79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe" gracePeriod=30 Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.128582 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqprb\" (UniqueName: \"kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb\") pod \"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1\" (UID: \"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.136552 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb" (OuterVolumeSpecName: "kube-api-access-vqprb") pod "2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" (UID: "2acd72a0-988c-4c58-a7b4-c139ee0f6ef1"). InnerVolumeSpecName "kube-api-access-vqprb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.230793 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqprb\" (UniqueName: \"kubernetes.io/projected/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1-kube-api-access-vqprb\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.591814 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.711356 4730 generic.go:334] "Generic (PLEG): container finished" podID="58959cc0-9ad2-48f9-83da-c33525b8919d" containerID="79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe" exitCode=0 Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.711425 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" event={"ID":"58959cc0-9ad2-48f9-83da-c33525b8919d","Type":"ContainerDied","Data":"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe"} Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.711431 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.711452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6677d585cb-h45gr" event={"ID":"58959cc0-9ad2-48f9-83da-c33525b8919d","Type":"ContainerDied","Data":"192de1853e19e7668e1d1800d2ca4b5c1e7f274d95cbee877dd11e903eda6ae2"} Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.711470 4730 scope.go:117] "RemoveContainer" containerID="79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.713163 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" event={"ID":"2acd72a0-988c-4c58-a7b4-c139ee0f6ef1","Type":"ContainerDied","Data":"09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad"} Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.713218 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c0170823fe5be12229dc19d56bbd7300e9039e431e3bc9f49d576558f3e4ad" Mar 20 15:46:04 crc 
kubenswrapper[4730]: I0320 15:46:04.713281 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567026-x7pgl" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.724531 4730 scope.go:117] "RemoveContainer" containerID="79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe" Mar 20 15:46:04 crc kubenswrapper[4730]: E0320 15:46:04.725000 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe\": container with ID starting with 79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe not found: ID does not exist" containerID="79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.725036 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe"} err="failed to get container status \"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe\": rpc error: code = NotFound desc = could not find container \"79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe\": container with ID starting with 79603ae8fc3af2a5353e289038a22a13e7af42a5609045c61b779ef68aeeb5fe not found: ID does not exist" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.736361 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2ml4\" (UniqueName: \"kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4\") pod \"58959cc0-9ad2-48f9-83da-c33525b8919d\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.736408 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert\") pod \"58959cc0-9ad2-48f9-83da-c33525b8919d\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.736451 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config\") pod \"58959cc0-9ad2-48f9-83da-c33525b8919d\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.736513 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca\") pod \"58959cc0-9ad2-48f9-83da-c33525b8919d\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.736536 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles\") pod \"58959cc0-9ad2-48f9-83da-c33525b8919d\" (UID: \"58959cc0-9ad2-48f9-83da-c33525b8919d\") " Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.737309 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "58959cc0-9ad2-48f9-83da-c33525b8919d" (UID: "58959cc0-9ad2-48f9-83da-c33525b8919d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.737321 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config" (OuterVolumeSpecName: "config") pod "58959cc0-9ad2-48f9-83da-c33525b8919d" (UID: "58959cc0-9ad2-48f9-83da-c33525b8919d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.737605 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca" (OuterVolumeSpecName: "client-ca") pod "58959cc0-9ad2-48f9-83da-c33525b8919d" (UID: "58959cc0-9ad2-48f9-83da-c33525b8919d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.739203 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58959cc0-9ad2-48f9-83da-c33525b8919d" (UID: "58959cc0-9ad2-48f9-83da-c33525b8919d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.739346 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4" (OuterVolumeSpecName: "kube-api-access-k2ml4") pod "58959cc0-9ad2-48f9-83da-c33525b8919d" (UID: "58959cc0-9ad2-48f9-83da-c33525b8919d"). InnerVolumeSpecName "kube-api-access-k2ml4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.837912 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2ml4\" (UniqueName: \"kubernetes.io/projected/58959cc0-9ad2-48f9-83da-c33525b8919d-kube-api-access-k2ml4\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.837961 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58959cc0-9ad2-48f9-83da-c33525b8919d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.837980 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.837998 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:04 crc kubenswrapper[4730]: I0320 15:46:04.838014 4730 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58959cc0-9ad2-48f9-83da-c33525b8919d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.038466 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.043412 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6677d585cb-h45gr"] Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.288451 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-jtgvw"] Mar 20 15:46:05 crc kubenswrapper[4730]: E0320 
15:46:05.288825 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" containerName="oc" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.288836 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" containerName="oc" Mar 20 15:46:05 crc kubenswrapper[4730]: E0320 15:46:05.288852 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58959cc0-9ad2-48f9-83da-c33525b8919d" containerName="controller-manager" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.288859 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="58959cc0-9ad2-48f9-83da-c33525b8919d" containerName="controller-manager" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.288950 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="58959cc0-9ad2-48f9-83da-c33525b8919d" containerName="controller-manager" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.288960 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" containerName="oc" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.289293 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.291129 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.291326 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.291949 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.292459 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.292578 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.292719 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.298564 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.302377 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-jtgvw"] Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.443102 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/337bfa51-7f51-418d-9eb8-2fdd55260cf5-serving-cert\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " 
pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.443224 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wlq8\" (UniqueName: \"kubernetes.io/projected/337bfa51-7f51-418d-9eb8-2fdd55260cf5-kube-api-access-5wlq8\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.443270 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-config\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.443298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.443343 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-client-ca\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.540871 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="58959cc0-9ad2-48f9-83da-c33525b8919d" path="/var/lib/kubelet/pods/58959cc0-9ad2-48f9-83da-c33525b8919d/volumes" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.543812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-client-ca\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.543875 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/337bfa51-7f51-418d-9eb8-2fdd55260cf5-serving-cert\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.544587 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-client-ca\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.544784 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wlq8\" (UniqueName: \"kubernetes.io/projected/337bfa51-7f51-418d-9eb8-2fdd55260cf5-kube-api-access-5wlq8\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.544859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-config\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.544928 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.545894 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-proxy-ca-bundles\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.546900 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337bfa51-7f51-418d-9eb8-2fdd55260cf5-config\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.551527 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/337bfa51-7f51-418d-9eb8-2fdd55260cf5-serving-cert\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.562318 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5wlq8\" (UniqueName: \"kubernetes.io/projected/337bfa51-7f51-418d-9eb8-2fdd55260cf5-kube-api-access-5wlq8\") pod \"controller-manager-5855d88fdc-jtgvw\" (UID: \"337bfa51-7f51-418d-9eb8-2fdd55260cf5\") " pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.608080 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:05 crc kubenswrapper[4730]: I0320 15:46:05.989003 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5855d88fdc-jtgvw"] Mar 20 15:46:06 crc kubenswrapper[4730]: I0320 15:46:06.727939 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" event={"ID":"337bfa51-7f51-418d-9eb8-2fdd55260cf5","Type":"ContainerStarted","Data":"1399da652fc3160c02ab5708d02305c04eebaa5861159c7d0f94f72f435ddb86"} Mar 20 15:46:06 crc kubenswrapper[4730]: I0320 15:46:06.727978 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" event={"ID":"337bfa51-7f51-418d-9eb8-2fdd55260cf5","Type":"ContainerStarted","Data":"83b3e0120b9b7984bb2a6622bbeb9fa9048be783e7d0806d9fd1a8e72797a307"} Mar 20 15:46:06 crc kubenswrapper[4730]: I0320 15:46:06.729328 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:06 crc kubenswrapper[4730]: I0320 15:46:06.734087 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" Mar 20 15:46:06 crc kubenswrapper[4730]: I0320 15:46:06.748494 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5855d88fdc-jtgvw" podStartSLOduration=2.748476839 podStartE2EDuration="2.748476839s" podCreationTimestamp="2026-03-20 15:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:46:06.744687035 +0000 UTC m=+425.958058404" watchObservedRunningTime="2026-03-20 15:46:06.748476839 +0000 UTC m=+425.961848208" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.477411 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mp2s2"] Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.478774 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.499625 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mp2s2"] Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664587 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-tls\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664627 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cafb408c-0cda-4634-9e71-1d727ff9a7f2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664663 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664685 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pk9\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-kube-api-access-p4pk9\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664751 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cafb408c-0cda-4634-9e71-1d727ff9a7f2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664767 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-trusted-ca\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664813 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-bound-sa-token\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: 
\"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.664844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-certificates\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.682968 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.765688 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cafb408c-0cda-4634-9e71-1d727ff9a7f2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.765999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-trusted-ca\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766077 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-bound-sa-token\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-certificates\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766144 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-tls\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766160 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cafb408c-0cda-4634-9e71-1d727ff9a7f2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766183 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pk9\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-kube-api-access-p4pk9\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.766665 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cafb408c-0cda-4634-9e71-1d727ff9a7f2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.767428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-certificates\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.767964 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cafb408c-0cda-4634-9e71-1d727ff9a7f2-trusted-ca\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.772126 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cafb408c-0cda-4634-9e71-1d727ff9a7f2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.774773 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-registry-tls\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc 
kubenswrapper[4730]: I0320 15:46:15.784411 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pk9\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-kube-api-access-p4pk9\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.784727 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cafb408c-0cda-4634-9e71-1d727ff9a7f2-bound-sa-token\") pod \"image-registry-66df7c8f76-mp2s2\" (UID: \"cafb408c-0cda-4634-9e71-1d727ff9a7f2\") " pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:15 crc kubenswrapper[4730]: I0320 15:46:15.793783 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:16 crc kubenswrapper[4730]: I0320 15:46:16.172984 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mp2s2"] Mar 20 15:46:16 crc kubenswrapper[4730]: I0320 15:46:16.775017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" event={"ID":"cafb408c-0cda-4634-9e71-1d727ff9a7f2","Type":"ContainerStarted","Data":"e14b1c56a67ba8393a624f2b94275d94ad4af04c29bd58a6458288abf9a52d00"} Mar 20 15:46:16 crc kubenswrapper[4730]: I0320 15:46:16.775062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" event={"ID":"cafb408c-0cda-4634-9e71-1d727ff9a7f2","Type":"ContainerStarted","Data":"6c9dd7c1503e298e4f613eae2ccd4a88dfeecc7a2497efab7bca0fd3df564762"} Mar 20 15:46:16 crc kubenswrapper[4730]: I0320 15:46:16.775151 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:16 crc kubenswrapper[4730]: I0320 15:46:16.796357 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" podStartSLOduration=1.7963366729999999 podStartE2EDuration="1.796336673s" podCreationTimestamp="2026-03-20 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:46:16.795684734 +0000 UTC m=+436.009056113" watchObservedRunningTime="2026-03-20 15:46:16.796336673 +0000 UTC m=+436.009708042" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.145334 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.146425 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" podUID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" containerName="route-controller-manager" containerID="cri-o://093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7" gracePeriod=30 Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.610985 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.785174 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbk4s\" (UniqueName: \"kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s\") pod \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.785280 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert\") pod \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.785311 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config\") pod \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.785372 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca\") pod \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\" (UID: \"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47\") " Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.786079 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config" (OuterVolumeSpecName: "config") pod "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" (UID: "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.786108 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" (UID: "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.792016 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" (UID: "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.793127 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s" (OuterVolumeSpecName: "kube-api-access-rbk4s") pod "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" (UID: "f6f367c6-14a0-4683-aff1-d6fbe6cc3a47"). InnerVolumeSpecName "kube-api-access-rbk4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.815746 4730 generic.go:334] "Generic (PLEG): container finished" podID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" containerID="093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7" exitCode=0 Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.815822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" event={"ID":"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47","Type":"ContainerDied","Data":"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7"} Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.815849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" event={"ID":"f6f367c6-14a0-4683-aff1-d6fbe6cc3a47","Type":"ContainerDied","Data":"4655d39dd1465fdb2c59ebf55c060fefff1bcbcd7ef3b0a27cf18646bd27a07f"} Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.815851 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.815866 4730 scope.go:117] "RemoveContainer" containerID="093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.853871 4730 scope.go:117] "RemoveContainer" containerID="093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7" Mar 20 15:46:24 crc kubenswrapper[4730]: E0320 15:46:24.854412 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7\": container with ID starting with 093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7 not found: ID does not exist" containerID="093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.854454 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7"} err="failed to get container status \"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7\": rpc error: code = NotFound desc = could not find container \"093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7\": container with ID starting with 093154fad0b7d76c4d87e1557430d65a3241a4160f21ee3af7aa8053222b0da7 not found: ID does not exist" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.871703 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.875059 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887974f98-g5dnn"] Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.886366 4730 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbk4s\" (UniqueName: \"kubernetes.io/projected/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-kube-api-access-rbk4s\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.886395 4730 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.886403 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:24 crc kubenswrapper[4730]: I0320 15:46:24.886412 4730 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.315534 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn"] Mar 20 15:46:25 crc kubenswrapper[4730]: E0320 15:46:25.316225 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" containerName="route-controller-manager" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.316242 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" containerName="route-controller-manager" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.316480 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" containerName="route-controller-manager" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.317036 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.319863 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.321726 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.321992 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.322157 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.322390 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.322733 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.330568 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn"] Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.494905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-client-ca\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.494969 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-serving-cert\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.494999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-config\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.495022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4jp\" (UniqueName: \"kubernetes.io/projected/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-kube-api-access-2l4jp\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.543229 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f367c6-14a0-4683-aff1-d6fbe6cc3a47" path="/var/lib/kubelet/pods/f6f367c6-14a0-4683-aff1-d6fbe6cc3a47/volumes" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.596194 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l4jp\" (UniqueName: \"kubernetes.io/projected/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-kube-api-access-2l4jp\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc 
kubenswrapper[4730]: I0320 15:46:25.596356 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-client-ca\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.596374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-serving-cert\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.596398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-config\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.597560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-config\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.599982 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-client-ca\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: 
\"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.608827 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-serving-cert\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.615862 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l4jp\" (UniqueName: \"kubernetes.io/projected/ba77c88b-7fd7-4b1b-be2c-9708ac3f766d-kube-api-access-2l4jp\") pod \"route-controller-manager-79bc85d457-d42jn\" (UID: \"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d\") " pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.636242 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.698108 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.698329 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.698370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.698414 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.700964 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.702855 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.703044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.703430 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.833638 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.833638 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.833756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.901774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.906961 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a-metrics-certs\") pod \"network-metrics-daemon-2prfn\" (UID: \"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a\") " pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.936694 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 15:46:25 crc kubenswrapper[4730]: I0320 15:46:25.944598 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2prfn" Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.046384 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn"] Mar 20 15:46:26 crc kubenswrapper[4730]: W0320 15:46:26.277913 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-8f66443105590edc6962d38c8fe805088c5010ec68eee05fcd7144569423c615 WatchSource:0}: Error finding container 8f66443105590edc6962d38c8fe805088c5010ec68eee05fcd7144569423c615: Status 404 returned error can't find the container with id 8f66443105590edc6962d38c8fe805088c5010ec68eee05fcd7144569423c615 Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.366755 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2prfn"] Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.841938 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2prfn" event={"ID":"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a","Type":"ContainerStarted","Data":"042e670d22ec3cd255c89c175c1ae13c5142eb845549769f30fc53f425605ef0"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.842328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2prfn" event={"ID":"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a","Type":"ContainerStarted","Data":"dd95ab3ea3f4a45814d13ed92e8765afd7e941fbc36557369f7c8ce736f5429c"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.842346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2prfn" event={"ID":"db9ee3bf-d96d-4ae8-bb5c-f299469bfc0a","Type":"ContainerStarted","Data":"509bc94cb9394c5da06389965f36ec02d42d5e991cb66ac94831363750ca9f36"} Mar 20 15:46:26 crc kubenswrapper[4730]: 
I0320 15:46:26.843735 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" event={"ID":"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d","Type":"ContainerStarted","Data":"5faf01e0b07690fd70093656deaf9b89f77f92f64bf5fe9609eda77952c15dea"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.843782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" event={"ID":"ba77c88b-7fd7-4b1b-be2c-9708ac3f766d","Type":"ContainerStarted","Data":"d18434bb33a755f466d4b04274990f2ecebf84177e2a328df838834244320e7a"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.843894 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.844895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"28f49df9dd8811e43a03c84f32b3d00f487e1c46caa9286b20491d1e279e755c"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.844934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c636efc01d0c49b1ce86f53c0064c221598c4bb08e67586d3f9ae1e2618f89f0"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.846074 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"80e9abf5f5b71f4dc7d897ab2c626cb54651f486709ea593fe5e4e2481bf3824"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.846107 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"153e9d4ad565cf26447b51dae0425e7dee85181ab21dc23255dab338040c1560"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.847671 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"16b135ea8083cb8cb5ba34ee60d710019169f125d3c216aacd5dce21f8be39d9"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.847717 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8f66443105590edc6962d38c8fe805088c5010ec68eee05fcd7144569423c615"} Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.847907 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.849026 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.874327 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2prfn" podStartSLOduration=394.874306495 podStartE2EDuration="6m34.874306495s" podCreationTimestamp="2026-03-20 15:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:46:26.858050026 +0000 UTC m=+446.071421395" watchObservedRunningTime="2026-03-20 15:46:26.874306495 +0000 UTC m=+446.087677874" Mar 20 15:46:26 crc kubenswrapper[4730]: I0320 15:46:26.908294 4730 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79bc85d457-d42jn" podStartSLOduration=2.908271127 podStartE2EDuration="2.908271127s" podCreationTimestamp="2026-03-20 15:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:46:26.904181684 +0000 UTC m=+446.117553073" watchObservedRunningTime="2026-03-20 15:46:26.908271127 +0000 UTC m=+446.121642496" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.107215 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.108044 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlnqc" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="registry-server" containerID="cri-o://3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa" gracePeriod=30 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.114498 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.114744 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mbtfk" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="registry-server" containerID="cri-o://b0b7cf1aa8683df6582d7fa32a0ee12665587f8843d63db7cc45648643eb352c" gracePeriod=30 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.118228 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.118432 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" containerID="cri-o://753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06" gracePeriod=30 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.133312 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.133617 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-flpw2" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="registry-server" containerID="cri-o://8a137f491e123e26bfe8e53249675fb8ec9405c7b00ec70ee4673e9a88e5d6bf" gracePeriod=30 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.146105 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b842f"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.147128 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.152033 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.152342 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rptq" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" containerID="cri-o://be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" gracePeriod=30 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.155566 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b842f"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.270424 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.270468 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwqt\" (UniqueName: \"kubernetes.io/projected/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-kube-api-access-jdwqt\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.270518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.371224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.371283 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwqt\" (UniqueName: \"kubernetes.io/projected/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-kube-api-access-jdwqt\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.374948 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.376109 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc 
kubenswrapper[4730]: I0320 15:46:30.376680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.386386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwqt\" (UniqueName: \"kubernetes.io/projected/b3eaa81f-92a9-49fa-aca0-1e8e35920f20-kube-api-access-jdwqt\") pod \"marketplace-operator-79b997595-b842f\" (UID: \"b3eaa81f-92a9-49fa-aca0-1e8e35920f20\") " pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.470652 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 15:46:30.496770 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e is running failed: container process not found" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 15:46:30.497481 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e is running failed: container process not found" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 
15:46:30.497827 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e is running failed: container process not found" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 15:46:30.497859 4730 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8rptq" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.713765 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.780210 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics\") pod \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.780282 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxflj\" (UniqueName: \"kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj\") pod \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.780335 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca\") pod \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\" (UID: \"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.780996 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" (UID: "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.784642 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" (UID: "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.785235 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj" (OuterVolumeSpecName: "kube-api-access-jxflj") pod "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" (UID: "e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3"). InnerVolumeSpecName "kube-api-access-jxflj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.813007 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.871494 4730 generic.go:334] "Generic (PLEG): container finished" podID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerID="753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06" exitCode=0 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.871566 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerDied","Data":"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.871602 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" event={"ID":"e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3","Type":"ContainerDied","Data":"759bd2ea27103b987add0fca450b8a256b74867157aa94299acbb52889decc8f"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.871623 4730 scope.go:117] "RemoveContainer" containerID="753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.871717 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-klbh8" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.876809 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.878692 4730 generic.go:334] "Generic (PLEG): container finished" podID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerID="3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa" exitCode=0 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.878744 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerDied","Data":"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.878763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlnqc" event={"ID":"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98","Type":"ContainerDied","Data":"3040ee2962d5bbb55e0748bc4e87b0b5953d222ba1e8e8b464bf1b4d1408cbfd"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.878817 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlnqc" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.880860 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgx8f\" (UniqueName: \"kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f\") pod \"558b00fd-2589-4842-8cba-db0cffe8c826\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.880914 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities\") pod \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.880938 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content\") pod \"558b00fd-2589-4842-8cba-db0cffe8c826\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.880975 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmhp\" (UniqueName: \"kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp\") pod \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.881181 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.881534 4730 generic.go:334] "Generic (PLEG): container finished" podID="558b00fd-2589-4842-8cba-db0cffe8c826" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" exitCode=0 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.881578 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerDied","Data":"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.881599 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rptq" event={"ID":"558b00fd-2589-4842-8cba-db0cffe8c826","Type":"ContainerDied","Data":"8b528dc3e6323a70e8b05e1cb0a0d95967e9a6d57d83e5d00d37458aa2621e38"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.881722 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rptq" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.882219 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities" (OuterVolumeSpecName: "utilities") pod "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" (UID: "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.882377 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities\") pod \"558b00fd-2589-4842-8cba-db0cffe8c826\" (UID: \"558b00fd-2589-4842-8cba-db0cffe8c826\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.882439 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content\") pod \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\" (UID: \"e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.883962 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities" (OuterVolumeSpecName: "utilities") pod "558b00fd-2589-4842-8cba-db0cffe8c826" (UID: "558b00fd-2589-4842-8cba-db0cffe8c826"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.884097 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp" (OuterVolumeSpecName: "kube-api-access-rgmhp") pod "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" (UID: "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98"). InnerVolumeSpecName "kube-api-access-rgmhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885284 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885310 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885324 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885323 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f" (OuterVolumeSpecName: "kube-api-access-kgx8f") pod "558b00fd-2589-4842-8cba-db0cffe8c826" (UID: "558b00fd-2589-4842-8cba-db0cffe8c826"). InnerVolumeSpecName "kube-api-access-kgx8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885336 4730 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885354 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmhp\" (UniqueName: \"kubernetes.io/projected/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-kube-api-access-rgmhp\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.885367 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxflj\" (UniqueName: \"kubernetes.io/projected/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3-kube-api-access-jxflj\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.891346 4730 generic.go:334] "Generic (PLEG): container finished" podID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerID="b0b7cf1aa8683df6582d7fa32a0ee12665587f8843d63db7cc45648643eb352c" exitCode=0 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.891424 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mbtfk" event={"ID":"d5addb8e-1dbc-41a2-8330-8a97251bd52f","Type":"ContainerDied","Data":"b0b7cf1aa8683df6582d7fa32a0ee12665587f8843d63db7cc45648643eb352c"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.891495 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mbtfk" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.895662 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a347883-e4f7-4fcd-8920-59519533cf43" containerID="8a137f491e123e26bfe8e53249675fb8ec9405c7b00ec70ee4673e9a88e5d6bf" exitCode=0 Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.895700 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerDied","Data":"8a137f491e123e26bfe8e53249675fb8ec9405c7b00ec70ee4673e9a88e5d6bf"} Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.919560 4730 scope.go:117] "RemoveContainer" containerID="53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.923958 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.938214 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.945179 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-klbh8"] Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.947371 4730 scope.go:117] "RemoveContainer" containerID="753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06" Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 15:46:30.949040 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06\": container with ID starting with 753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06 not found: ID does not exist" containerID="753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.949067 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06"} err="failed to get container status \"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06\": rpc error: code = NotFound desc = could not find container \"753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06\": container with ID starting with 753a6699245a34bcfcd2383bf2298b09146bcf7e23ec3ecf85f51c516941cd06 not found: ID does not exist" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.949086 4730 scope.go:117] "RemoveContainer" containerID="53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2" Mar 20 15:46:30 crc kubenswrapper[4730]: E0320 15:46:30.949390 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2\": container with ID starting with 
53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2 not found: ID does not exist" containerID="53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.949427 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2"} err="failed to get container status \"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2\": rpc error: code = NotFound desc = could not find container \"53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2\": container with ID starting with 53a23df451c80721daf3b80414ff05d019adbd298f30cf30f417f8af1c2bafc2 not found: ID does not exist" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.949451 4730 scope.go:117] "RemoveContainer" containerID="3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.973153 4730 scope.go:117] "RemoveContainer" containerID="2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.975504 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" (UID: "e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.985721 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content\") pod \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.986071 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6js\" (UniqueName: \"kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js\") pod \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.986204 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content\") pod \"5a347883-e4f7-4fcd-8920-59519533cf43\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.986409 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities\") pod \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\" (UID: \"d5addb8e-1dbc-41a2-8330-8a97251bd52f\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.986525 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities\") pod \"5a347883-e4f7-4fcd-8920-59519533cf43\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.986632 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs46z\" 
(UniqueName: \"kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z\") pod \"5a347883-e4f7-4fcd-8920-59519533cf43\" (UID: \"5a347883-e4f7-4fcd-8920-59519533cf43\") " Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.987173 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.987399 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgx8f\" (UniqueName: \"kubernetes.io/projected/558b00fd-2589-4842-8cba-db0cffe8c826-kube-api-access-kgx8f\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.987739 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities" (OuterVolumeSpecName: "utilities") pod "d5addb8e-1dbc-41a2-8330-8a97251bd52f" (UID: "d5addb8e-1dbc-41a2-8330-8a97251bd52f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.987829 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities" (OuterVolumeSpecName: "utilities") pod "5a347883-e4f7-4fcd-8920-59519533cf43" (UID: "5a347883-e4f7-4fcd-8920-59519533cf43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.990520 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js" (OuterVolumeSpecName: "kube-api-access-rd6js") pod "d5addb8e-1dbc-41a2-8330-8a97251bd52f" (UID: "d5addb8e-1dbc-41a2-8330-8a97251bd52f"). 
InnerVolumeSpecName "kube-api-access-rd6js". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.990872 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z" (OuterVolumeSpecName: "kube-api-access-hs46z") pod "5a347883-e4f7-4fcd-8920-59519533cf43" (UID: "5a347883-e4f7-4fcd-8920-59519533cf43"). InnerVolumeSpecName "kube-api-access-hs46z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:46:30 crc kubenswrapper[4730]: I0320 15:46:30.991826 4730 scope.go:117] "RemoveContainer" containerID="0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.005419 4730 scope.go:117] "RemoveContainer" containerID="3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.005864 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa\": container with ID starting with 3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa not found: ID does not exist" containerID="3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.005906 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa"} err="failed to get container status \"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa\": rpc error: code = NotFound desc = could not find container \"3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa\": container with ID starting with 3303b366b010494b00cff91f0adf58b15d0be7946981888a990192d9cd69b3fa not found: ID does not exist" Mar 20 15:46:31 crc 
kubenswrapper[4730]: I0320 15:46:31.005960 4730 scope.go:117] "RemoveContainer" containerID="2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.007721 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75\": container with ID starting with 2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75 not found: ID does not exist" containerID="2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.009225 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75"} err="failed to get container status \"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75\": rpc error: code = NotFound desc = could not find container \"2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75\": container with ID starting with 2c3e34ad9ea0b6c3222cf006f08a02a03e69e35e189c65669c0748e767b79f75 not found: ID does not exist" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.009274 4730 scope.go:117] "RemoveContainer" containerID="0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.010775 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a\": container with ID starting with 0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a not found: ID does not exist" containerID="0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.010821 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a"} err="failed to get container status \"0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a\": rpc error: code = NotFound desc = could not find container \"0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a\": container with ID starting with 0936786d2af592781681255288d0bc8bfa0e6ea172412747ed1615c053176e9a not found: ID does not exist" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.010854 4730 scope.go:117] "RemoveContainer" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.024033 4730 scope.go:117] "RemoveContainer" containerID="0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.029866 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a347883-e4f7-4fcd-8920-59519533cf43" (UID: "5a347883-e4f7-4fcd-8920-59519533cf43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.041604 4730 scope.go:117] "RemoveContainer" containerID="c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.044315 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558b00fd-2589-4842-8cba-db0cffe8c826" (UID: "558b00fd-2589-4842-8cba-db0cffe8c826"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.057279 4730 scope.go:117] "RemoveContainer" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.057793 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e\": container with ID starting with be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e not found: ID does not exist" containerID="be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.057845 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e"} err="failed to get container status \"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e\": rpc error: code = NotFound desc = could not find container \"be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e\": container with ID starting with be8ad8b5a0faeab783b7ccafb3517fc045687f3f5ccf91534d7b0f3ee31c621e not found: ID does not exist" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.057880 4730 scope.go:117] "RemoveContainer" containerID="0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.058148 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5addb8e-1dbc-41a2-8330-8a97251bd52f" (UID: "d5addb8e-1dbc-41a2-8330-8a97251bd52f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.058285 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1\": container with ID starting with 0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1 not found: ID does not exist" containerID="0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.058315 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1"} err="failed to get container status \"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1\": rpc error: code = NotFound desc = could not find container \"0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1\": container with ID starting with 0b9feaef40e353d64a848dba5e34276e42725c50bac6122fd4b5265fc07ad6a1 not found: ID does not exist" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.058334 4730 scope.go:117] "RemoveContainer" containerID="c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73" Mar 20 15:46:31 crc kubenswrapper[4730]: E0320 15:46:31.058751 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73\": container with ID starting with c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73 not found: ID does not exist" containerID="c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.058783 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73"} 
err="failed to get container status \"c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73\": rpc error: code = NotFound desc = could not find container \"c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73\": container with ID starting with c2ed9a8424613f58ef80e56f4945d0a2f01a1e4ab20cce8fcafa4c0b7fc30d73 not found: ID does not exist" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.058810 4730 scope.go:117] "RemoveContainer" containerID="b0b7cf1aa8683df6582d7fa32a0ee12665587f8843d63db7cc45648643eb352c" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.073265 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b842f"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.073401 4730 scope.go:117] "RemoveContainer" containerID="027ff3ee79dd3768bc7352d26b5e9a7647079a2f17aa58047546ce0332c5b335" Mar 20 15:46:31 crc kubenswrapper[4730]: W0320 15:46:31.077383 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3eaa81f_92a9_49fa_aca0_1e8e35920f20.slice/crio-f2983f2e7c0e6fdb3a360dbaf18785290bfa97fa39dfa4dd90909de1062ea9c1 WatchSource:0}: Error finding container f2983f2e7c0e6fdb3a360dbaf18785290bfa97fa39dfa4dd90909de1062ea9c1: Status 404 returned error can't find the container with id f2983f2e7c0e6fdb3a360dbaf18785290bfa97fa39dfa4dd90909de1062ea9c1 Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091468 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091523 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558b00fd-2589-4842-8cba-db0cffe8c826-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 
crc kubenswrapper[4730]: I0320 15:46:31.091538 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6js\" (UniqueName: \"kubernetes.io/projected/d5addb8e-1dbc-41a2-8330-8a97251bd52f-kube-api-access-rd6js\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091549 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091559 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5addb8e-1dbc-41a2-8330-8a97251bd52f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091567 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a347883-e4f7-4fcd-8920-59519533cf43-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.091575 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs46z\" (UniqueName: \"kubernetes.io/projected/5a347883-e4f7-4fcd-8920-59519533cf43-kube-api-access-hs46z\") on node \"crc\" DevicePath \"\"" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.092081 4730 scope.go:117] "RemoveContainer" containerID="1ce763ed176ec4f4dede58163ade6fda497d7444c6c6f195c24a524a711de167" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.209988 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.222269 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlnqc"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.225601 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.232271 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rptq"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.235464 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.238268 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mbtfk"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.546120 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" path="/var/lib/kubelet/pods/558b00fd-2589-4842-8cba-db0cffe8c826/volumes" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.547029 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" path="/var/lib/kubelet/pods/d5addb8e-1dbc-41a2-8330-8a97251bd52f/volumes" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.547744 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" path="/var/lib/kubelet/pods/e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3/volumes" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.548671 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" path="/var/lib/kubelet/pods/e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98/volumes" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.903420 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" event={"ID":"b3eaa81f-92a9-49fa-aca0-1e8e35920f20","Type":"ContainerStarted","Data":"23ffe972b9481dc0923d0a873a956bf4171c6c5123d4b682bb2650dec152f60f"} Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.903460 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" event={"ID":"b3eaa81f-92a9-49fa-aca0-1e8e35920f20","Type":"ContainerStarted","Data":"f2983f2e7c0e6fdb3a360dbaf18785290bfa97fa39dfa4dd90909de1062ea9c1"} Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.903626 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.907332 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flpw2" event={"ID":"5a347883-e4f7-4fcd-8920-59519533cf43","Type":"ContainerDied","Data":"9a27ed5cf68d2bc6928d37904a276f94c614d0e58deaf1534a9994ffbccaa224"} Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.907381 4730 scope.go:117] "RemoveContainer" containerID="8a137f491e123e26bfe8e53249675fb8ec9405c7b00ec70ee4673e9a88e5d6bf" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.907530 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flpw2" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.908404 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.930832 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b842f" podStartSLOduration=1.930811286 podStartE2EDuration="1.930811286s" podCreationTimestamp="2026-03-20 15:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:46:31.927056243 +0000 UTC m=+451.140427622" watchObservedRunningTime="2026-03-20 15:46:31.930811286 +0000 UTC m=+451.144182665" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.931115 4730 scope.go:117] "RemoveContainer" containerID="9a83b1f8dc654ef4f4276c65729d5eabf19cc5bf1944836a69eeb1d195139aba" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.945643 4730 scope.go:117] "RemoveContainer" containerID="1e2d0f7b622d4a27e1b76b3b32f61e354d6ed5f7ddeb8e6368356819c35fc74f" Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.965169 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:46:31 crc kubenswrapper[4730]: I0320 15:46:31.975278 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-flpw2"] Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.514885 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vk6rc"] Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515389 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 
15:46:32.515405 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515416 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515424 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515433 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515441 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515449 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515456 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515469 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515478 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515486 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 
15:46:32.515493 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515501 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515508 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515517 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515524 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515536 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515544 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515553 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515560 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515570 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 
15:46:32.515577 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="extract-content" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515586 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515593 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="extract-utilities" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515600 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515605 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515703 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515714 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5addb8e-1dbc-41a2-8330-8a97251bd52f" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515727 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515735 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="558b00fd-2589-4842-8cba-db0cffe8c826" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515742 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc 
kubenswrapper[4730]: I0320 15:46:32.515752 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e9bea0-2eab-4ac3-ae73-6ad7bf4d7a98" containerName="registry-server" Mar 20 15:46:32 crc kubenswrapper[4730]: E0320 15:46:32.515835 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.515843 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ea8f10-d0d5-4b2b-9cc1-58eb2d785af3" containerName="marketplace-operator" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.516943 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.520180 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.531422 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk6rc"] Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.607967 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqqz\" (UniqueName: \"kubernetes.io/projected/70d03566-9776-4dcc-84b5-17281f8ae66e-kube-api-access-nbqqz\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.608134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-utilities\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: 
I0320 15:46:32.608170 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-catalog-content\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.709726 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-utilities\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.709790 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-catalog-content\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.709841 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqqz\" (UniqueName: \"kubernetes.io/projected/70d03566-9776-4dcc-84b5-17281f8ae66e-kube-api-access-nbqqz\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.710369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-catalog-content\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.710364 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d03566-9776-4dcc-84b5-17281f8ae66e-utilities\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.729455 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqqz\" (UniqueName: \"kubernetes.io/projected/70d03566-9776-4dcc-84b5-17281f8ae66e-kube-api-access-nbqqz\") pod \"redhat-operators-vk6rc\" (UID: \"70d03566-9776-4dcc-84b5-17281f8ae66e\") " pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:32 crc kubenswrapper[4730]: I0320 15:46:32.831385 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:33 crc kubenswrapper[4730]: I0320 15:46:33.261360 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk6rc"] Mar 20 15:46:33 crc kubenswrapper[4730]: I0320 15:46:33.541382 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a347883-e4f7-4fcd-8920-59519533cf43" path="/var/lib/kubelet/pods/5a347883-e4f7-4fcd-8920-59519533cf43/volumes" Mar 20 15:46:33 crc kubenswrapper[4730]: I0320 15:46:33.925061 4730 generic.go:334] "Generic (PLEG): container finished" podID="70d03566-9776-4dcc-84b5-17281f8ae66e" containerID="c7a40db6c3e39bcdc6ed4f1aa1615de26a75d2d529d29d0be891dcfb62c4c11c" exitCode=0 Mar 20 15:46:33 crc kubenswrapper[4730]: I0320 15:46:33.925147 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6rc" event={"ID":"70d03566-9776-4dcc-84b5-17281f8ae66e","Type":"ContainerDied","Data":"c7a40db6c3e39bcdc6ed4f1aa1615de26a75d2d529d29d0be891dcfb62c4c11c"} Mar 20 15:46:33 crc kubenswrapper[4730]: I0320 15:46:33.925172 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vk6rc" event={"ID":"70d03566-9776-4dcc-84b5-17281f8ae66e","Type":"ContainerStarted","Data":"89095373adcfb1a8fa95226ed1d5a210203e8b1527f394de91dacd42089f46d4"} Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.317698 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rkhd6"] Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.318607 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.321064 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.329561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-catalog-content\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.329628 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6bc6\" (UniqueName: \"kubernetes.io/projected/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-kube-api-access-z6bc6\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.329683 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-utilities\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " 
pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.335122 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkhd6"] Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.430607 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-catalog-content\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.430653 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6bc6\" (UniqueName: \"kubernetes.io/projected/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-kube-api-access-z6bc6\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.430697 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-utilities\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.431103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-catalog-content\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.431118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-utilities\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.447738 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6bc6\" (UniqueName: \"kubernetes.io/projected/d6e8fab3-7ebb-4b3f-af2c-fcc299e01381-kube-api-access-z6bc6\") pod \"certified-operators-rkhd6\" (UID: \"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381\") " pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.641014 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.915816 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbtzz"] Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.917775 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.920215 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.928775 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbtzz"] Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.935828 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-catalog-content\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.935898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6p9\" (UniqueName: \"kubernetes.io/projected/d8a22a9f-2975-485c-99f7-05e6b934e0a1-kube-api-access-zw6p9\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:34 crc kubenswrapper[4730]: I0320 15:46:34.935952 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-utilities\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.037072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-catalog-content\") pod \"community-operators-bbtzz\" (UID: 
\"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.037160 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw6p9\" (UniqueName: \"kubernetes.io/projected/d8a22a9f-2975-485c-99f7-05e6b934e0a1-kube-api-access-zw6p9\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.037228 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-utilities\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.038044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-utilities\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.038044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a22a9f-2975-485c-99f7-05e6b934e0a1-catalog-content\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.055862 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rkhd6"] Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.061312 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw6p9\" 
(UniqueName: \"kubernetes.io/projected/d8a22a9f-2975-485c-99f7-05e6b934e0a1-kube-api-access-zw6p9\") pod \"community-operators-bbtzz\" (UID: \"d8a22a9f-2975-485c-99f7-05e6b934e0a1\") " pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: W0320 15:46:35.069314 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e8fab3_7ebb_4b3f_af2c_fcc299e01381.slice/crio-b3a96be23f133a45e47eabf32d63cbb3b3f6505ef19ae7141a386229ffe971c0 WatchSource:0}: Error finding container b3a96be23f133a45e47eabf32d63cbb3b3f6505ef19ae7141a386229ffe971c0: Status 404 returned error can't find the container with id b3a96be23f133a45e47eabf32d63cbb3b3f6505ef19ae7141a386229ffe971c0 Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.236757 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.605220 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbtzz"] Mar 20 15:46:35 crc kubenswrapper[4730]: W0320 15:46:35.623314 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a22a9f_2975_485c_99f7_05e6b934e0a1.slice/crio-663eae788e97fc6ca48a1450b50b94b2e512a4a113ab307d9069b46bc8aa6e1b WatchSource:0}: Error finding container 663eae788e97fc6ca48a1450b50b94b2e512a4a113ab307d9069b46bc8aa6e1b: Status 404 returned error can't find the container with id 663eae788e97fc6ca48a1450b50b94b2e512a4a113ab307d9069b46bc8aa6e1b Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.799314 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mp2s2" Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.867197 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.938008 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6rc" event={"ID":"70d03566-9776-4dcc-84b5-17281f8ae66e","Type":"ContainerStarted","Data":"73da7c88a758b29a10ab785c5df8fff5977b17912b7a7aa61adcd6a4298dd476"} Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.940428 4730 generic.go:334] "Generic (PLEG): container finished" podID="d6e8fab3-7ebb-4b3f-af2c-fcc299e01381" containerID="73ab3b31d9dda4fc31017dd8ae10b58885314deb0ed5858dbad85902b8789321" exitCode=0 Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.940568 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkhd6" event={"ID":"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381","Type":"ContainerDied","Data":"73ab3b31d9dda4fc31017dd8ae10b58885314deb0ed5858dbad85902b8789321"} Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.940616 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkhd6" event={"ID":"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381","Type":"ContainerStarted","Data":"b3a96be23f133a45e47eabf32d63cbb3b3f6505ef19ae7141a386229ffe971c0"} Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.942776 4730 generic.go:334] "Generic (PLEG): container finished" podID="d8a22a9f-2975-485c-99f7-05e6b934e0a1" containerID="bbc862e72da6db4159a1b63077867ba68600c99429b0bebda558060a0ceca52f" exitCode=0 Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.942860 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbtzz" event={"ID":"d8a22a9f-2975-485c-99f7-05e6b934e0a1","Type":"ContainerDied","Data":"bbc862e72da6db4159a1b63077867ba68600c99429b0bebda558060a0ceca52f"} Mar 20 15:46:35 crc kubenswrapper[4730]: I0320 15:46:35.942933 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bbtzz" event={"ID":"d8a22a9f-2975-485c-99f7-05e6b934e0a1","Type":"ContainerStarted","Data":"663eae788e97fc6ca48a1450b50b94b2e512a4a113ab307d9069b46bc8aa6e1b"} Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.714620 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7vhhm"] Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.716507 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.718714 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.724649 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vhhm"] Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.860663 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-utilities\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.861418 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-catalog-content\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.861590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqtlx\" (UniqueName: 
\"kubernetes.io/projected/cae6da2c-50d0-460f-b29c-5b3e3df439c5-kube-api-access-lqtlx\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.949906 4730 generic.go:334] "Generic (PLEG): container finished" podID="70d03566-9776-4dcc-84b5-17281f8ae66e" containerID="73da7c88a758b29a10ab785c5df8fff5977b17912b7a7aa61adcd6a4298dd476" exitCode=0 Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.949948 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6rc" event={"ID":"70d03566-9776-4dcc-84b5-17281f8ae66e","Type":"ContainerDied","Data":"73da7c88a758b29a10ab785c5df8fff5977b17912b7a7aa61adcd6a4298dd476"} Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.962742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqtlx\" (UniqueName: \"kubernetes.io/projected/cae6da2c-50d0-460f-b29c-5b3e3df439c5-kube-api-access-lqtlx\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.962893 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-utilities\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.962984 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-catalog-content\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 
crc kubenswrapper[4730]: I0320 15:46:36.963363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-utilities\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.963586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae6da2c-50d0-460f-b29c-5b3e3df439c5-catalog-content\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:36 crc kubenswrapper[4730]: I0320 15:46:36.982841 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqtlx\" (UniqueName: \"kubernetes.io/projected/cae6da2c-50d0-460f-b29c-5b3e3df439c5-kube-api-access-lqtlx\") pod \"redhat-marketplace-7vhhm\" (UID: \"cae6da2c-50d0-460f-b29c-5b3e3df439c5\") " pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.036320 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.467564 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vhhm"] Mar 20 15:46:37 crc kubenswrapper[4730]: W0320 15:46:37.477535 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae6da2c_50d0_460f_b29c_5b3e3df439c5.slice/crio-37af390bdb5af33c7366157b9007569fb0d12bfaeb477babb66c301c550167b2 WatchSource:0}: Error finding container 37af390bdb5af33c7366157b9007569fb0d12bfaeb477babb66c301c550167b2: Status 404 returned error can't find the container with id 37af390bdb5af33c7366157b9007569fb0d12bfaeb477babb66c301c550167b2 Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.956372 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk6rc" event={"ID":"70d03566-9776-4dcc-84b5-17281f8ae66e","Type":"ContainerStarted","Data":"a02ec7173c2df409bf4c46faf3534e004b5113e620387bf44510db2eb3565d69"} Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.958461 4730 generic.go:334] "Generic (PLEG): container finished" podID="d6e8fab3-7ebb-4b3f-af2c-fcc299e01381" containerID="637bfd25c305324e8a040583ebdd939dd62f4fda7f8a9fc68a08ab56da1cc550" exitCode=0 Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.958642 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkhd6" event={"ID":"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381","Type":"ContainerDied","Data":"637bfd25c305324e8a040583ebdd939dd62f4fda7f8a9fc68a08ab56da1cc550"} Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.959815 4730 generic.go:334] "Generic (PLEG): container finished" podID="cae6da2c-50d0-460f-b29c-5b3e3df439c5" containerID="30191a20b8e2b8605c55ff9074a4a3e62d06ac6642f2c8bccc6466ce0fcb479b" exitCode=0 Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 
15:46:37.959851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vhhm" event={"ID":"cae6da2c-50d0-460f-b29c-5b3e3df439c5","Type":"ContainerDied","Data":"30191a20b8e2b8605c55ff9074a4a3e62d06ac6642f2c8bccc6466ce0fcb479b"} Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.959863 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vhhm" event={"ID":"cae6da2c-50d0-460f-b29c-5b3e3df439c5","Type":"ContainerStarted","Data":"37af390bdb5af33c7366157b9007569fb0d12bfaeb477babb66c301c550167b2"} Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.966975 4730 generic.go:334] "Generic (PLEG): container finished" podID="d8a22a9f-2975-485c-99f7-05e6b934e0a1" containerID="179bf7e47b9d226171d3a00da200d75f48331106eaf22adaada32e89e66497ee" exitCode=0 Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.967024 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbtzz" event={"ID":"d8a22a9f-2975-485c-99f7-05e6b934e0a1","Type":"ContainerDied","Data":"179bf7e47b9d226171d3a00da200d75f48331106eaf22adaada32e89e66497ee"} Mar 20 15:46:37 crc kubenswrapper[4730]: I0320 15:46:37.973498 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vk6rc" podStartSLOduration=2.386061558 podStartE2EDuration="5.973486161s" podCreationTimestamp="2026-03-20 15:46:32 +0000 UTC" firstStartedPulling="2026-03-20 15:46:33.92677269 +0000 UTC m=+453.140144059" lastFinishedPulling="2026-03-20 15:46:37.514197293 +0000 UTC m=+456.727568662" observedRunningTime="2026-03-20 15:46:37.973211443 +0000 UTC m=+457.186582812" watchObservedRunningTime="2026-03-20 15:46:37.973486161 +0000 UTC m=+457.186857520" Mar 20 15:46:38 crc kubenswrapper[4730]: I0320 15:46:38.977202 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rkhd6" 
event={"ID":"d6e8fab3-7ebb-4b3f-af2c-fcc299e01381","Type":"ContainerStarted","Data":"88c5f5d777fda8e8b64d06a7a00c4fc6f83af7fca08eca4140f037c5ad6fa0d7"} Mar 20 15:46:38 crc kubenswrapper[4730]: I0320 15:46:38.979421 4730 generic.go:334] "Generic (PLEG): container finished" podID="cae6da2c-50d0-460f-b29c-5b3e3df439c5" containerID="7f56571bd135700900c30c39a5d9d179d5a25296048600cd392a928e99d1e34f" exitCode=0 Mar 20 15:46:38 crc kubenswrapper[4730]: I0320 15:46:38.979483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vhhm" event={"ID":"cae6da2c-50d0-460f-b29c-5b3e3df439c5","Type":"ContainerDied","Data":"7f56571bd135700900c30c39a5d9d179d5a25296048600cd392a928e99d1e34f"} Mar 20 15:46:38 crc kubenswrapper[4730]: I0320 15:46:38.987520 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbtzz" event={"ID":"d8a22a9f-2975-485c-99f7-05e6b934e0a1","Type":"ContainerStarted","Data":"1ce1392e9cef60e8f158c640ba92127c0e0c1616d90fa5cf171c249de5e2237e"} Mar 20 15:46:39 crc kubenswrapper[4730]: I0320 15:46:39.024508 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bbtzz" podStartSLOduration=2.5453811330000002 podStartE2EDuration="5.02449332s" podCreationTimestamp="2026-03-20 15:46:34 +0000 UTC" firstStartedPulling="2026-03-20 15:46:35.944373724 +0000 UTC m=+455.157745093" lastFinishedPulling="2026-03-20 15:46:38.423485901 +0000 UTC m=+457.636857280" observedRunningTime="2026-03-20 15:46:39.021604935 +0000 UTC m=+458.234976314" watchObservedRunningTime="2026-03-20 15:46:39.02449332 +0000 UTC m=+458.237864689" Mar 20 15:46:39 crc kubenswrapper[4730]: I0320 15:46:39.035968 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rkhd6" podStartSLOduration=2.51083002 podStartE2EDuration="5.035950445s" podCreationTimestamp="2026-03-20 15:46:34 +0000 UTC" 
firstStartedPulling="2026-03-20 15:46:35.942072915 +0000 UTC m=+455.155444284" lastFinishedPulling="2026-03-20 15:46:38.46719334 +0000 UTC m=+457.680564709" observedRunningTime="2026-03-20 15:46:39.006777811 +0000 UTC m=+458.220149180" watchObservedRunningTime="2026-03-20 15:46:39.035950445 +0000 UTC m=+458.249321814" Mar 20 15:46:39 crc kubenswrapper[4730]: I0320 15:46:39.996461 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vhhm" event={"ID":"cae6da2c-50d0-460f-b29c-5b3e3df439c5","Type":"ContainerStarted","Data":"f3544d80412c2df30a39271c94de8de8e78751495e2eb74202b2534f5e34b80b"} Mar 20 15:46:40 crc kubenswrapper[4730]: I0320 15:46:40.014977 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7vhhm" podStartSLOduration=2.551161587 podStartE2EDuration="4.014960547s" podCreationTimestamp="2026-03-20 15:46:36 +0000 UTC" firstStartedPulling="2026-03-20 15:46:37.96388724 +0000 UTC m=+457.177258609" lastFinishedPulling="2026-03-20 15:46:39.4276862 +0000 UTC m=+458.641057569" observedRunningTime="2026-03-20 15:46:40.013050132 +0000 UTC m=+459.226421501" watchObservedRunningTime="2026-03-20 15:46:40.014960547 +0000 UTC m=+459.228331916" Mar 20 15:46:42 crc kubenswrapper[4730]: I0320 15:46:42.832479 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:42 crc kubenswrapper[4730]: I0320 15:46:42.833113 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:42 crc kubenswrapper[4730]: I0320 15:46:42.880546 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Mar 20 15:46:42 crc kubenswrapper[4730]: I0320 15:46:42.880596 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:46:43 crc kubenswrapper[4730]: I0320 15:46:43.872946 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vk6rc" podUID="70d03566-9776-4dcc-84b5-17281f8ae66e" containerName="registry-server" probeResult="failure" output=< Mar 20 15:46:43 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 15:46:43 crc kubenswrapper[4730]: > Mar 20 15:46:44 crc kubenswrapper[4730]: I0320 15:46:44.642102 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:44 crc kubenswrapper[4730]: I0320 15:46:44.642266 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:44 crc kubenswrapper[4730]: I0320 15:46:44.677312 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:45 crc kubenswrapper[4730]: I0320 15:46:45.059100 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rkhd6" Mar 20 15:46:45 crc kubenswrapper[4730]: I0320 15:46:45.237777 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:45 crc kubenswrapper[4730]: I0320 15:46:45.237821 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:45 crc kubenswrapper[4730]: I0320 
15:46:45.278128 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:46 crc kubenswrapper[4730]: I0320 15:46:46.072023 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbtzz" Mar 20 15:46:47 crc kubenswrapper[4730]: I0320 15:46:47.037353 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:47 crc kubenswrapper[4730]: I0320 15:46:47.037698 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:47 crc kubenswrapper[4730]: I0320 15:46:47.082137 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:48 crc kubenswrapper[4730]: I0320 15:46:48.080820 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7vhhm" Mar 20 15:46:52 crc kubenswrapper[4730]: I0320 15:46:52.894237 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:46:52 crc kubenswrapper[4730]: I0320 15:46:52.946498 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vk6rc" Mar 20 15:47:00 crc kubenswrapper[4730]: I0320 15:47:00.905588 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" podUID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" containerName="registry" containerID="cri-o://deb22e7f4b1084e112c47fe5c83d1b16157bc080fc85d2ff74bb28b439a9502d" gracePeriod=30 Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.124241 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" containerID="deb22e7f4b1084e112c47fe5c83d1b16157bc080fc85d2ff74bb28b439a9502d" exitCode=0 Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.124328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" event={"ID":"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e","Type":"ContainerDied","Data":"deb22e7f4b1084e112c47fe5c83d1b16157bc080fc85d2ff74bb28b439a9502d"} Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.347103 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.496935 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.496995 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497039 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497088 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497134 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l78n7\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497161 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497200 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.497498 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\" (UID: \"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e\") " Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.498757 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.499893 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.508897 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.516483 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.516659 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.516875 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7" (OuterVolumeSpecName: "kube-api-access-l78n7") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "kube-api-access-l78n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.517158 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.517723 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" (UID: "54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599091 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599135 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l78n7\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-kube-api-access-l78n7\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599148 4730 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599160 4730 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599171 4730 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599183 4730 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:01 crc kubenswrapper[4730]: I0320 15:47:01.599194 4730 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 15:47:02 crc 
kubenswrapper[4730]: I0320 15:47:02.132027 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" event={"ID":"54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e","Type":"ContainerDied","Data":"2585ec9b7501bfbd7ee8fe496b92574cacdd42248f1dc5fb5a3a27e92aa35714"} Mar 20 15:47:02 crc kubenswrapper[4730]: I0320 15:47:02.132082 4730 scope.go:117] "RemoveContainer" containerID="deb22e7f4b1084e112c47fe5c83d1b16157bc080fc85d2ff74bb28b439a9502d" Mar 20 15:47:02 crc kubenswrapper[4730]: I0320 15:47:02.132087 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bdpg6" Mar 20 15:47:02 crc kubenswrapper[4730]: I0320 15:47:02.151481 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:47:02 crc kubenswrapper[4730]: I0320 15:47:02.157162 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bdpg6"] Mar 20 15:47:03 crc kubenswrapper[4730]: I0320 15:47:03.541664 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" path="/var/lib/kubelet/pods/54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e/volumes" Mar 20 15:47:05 crc kubenswrapper[4730]: I0320 15:47:05.837651 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 15:47:12 crc kubenswrapper[4730]: I0320 15:47:12.880097 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:47:12 crc kubenswrapper[4730]: I0320 15:47:12.880880 4730 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:47:42 crc kubenswrapper[4730]: I0320 15:47:42.880228 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:47:42 crc kubenswrapper[4730]: I0320 15:47:42.880806 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:47:42 crc kubenswrapper[4730]: I0320 15:47:42.880853 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:47:42 crc kubenswrapper[4730]: I0320 15:47:42.881387 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 15:47:42 crc kubenswrapper[4730]: I0320 15:47:42.881440 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" 
containerID="cri-o://418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583" gracePeriod=600 Mar 20 15:47:43 crc kubenswrapper[4730]: I0320 15:47:43.351931 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583" exitCode=0 Mar 20 15:47:43 crc kubenswrapper[4730]: I0320 15:47:43.351985 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583"} Mar 20 15:47:43 crc kubenswrapper[4730]: I0320 15:47:43.352336 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c"} Mar 20 15:47:43 crc kubenswrapper[4730]: I0320 15:47:43.352361 4730 scope.go:117] "RemoveContainer" containerID="cd661cda796a2bd61d1446bee672c4471d60370245caf7cb54faf54dfa9c58a0" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.129645 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567028-s6xcp"] Mar 20 15:48:00 crc kubenswrapper[4730]: E0320 15:48:00.130380 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" containerName="registry" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.130394 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" containerName="registry" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.130497 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="54319e7c-f09f-4f3d-80cd-8d6dcd4ef88e" containerName="registry" Mar 20 15:48:00 crc 
kubenswrapper[4730]: I0320 15:48:00.130911 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.133422 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.133580 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.133607 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.142968 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567028-s6xcp"] Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.240172 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578cc\" (UniqueName: \"kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc\") pod \"auto-csr-approver-29567028-s6xcp\" (UID: \"e56ca246-99ac-4397-a499-62738ac94a39\") " pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.341791 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578cc\" (UniqueName: \"kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc\") pod \"auto-csr-approver-29567028-s6xcp\" (UID: \"e56ca246-99ac-4397-a499-62738ac94a39\") " pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.360197 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578cc\" (UniqueName: \"kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc\") pod 
\"auto-csr-approver-29567028-s6xcp\" (UID: \"e56ca246-99ac-4397-a499-62738ac94a39\") " pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:00 crc kubenswrapper[4730]: I0320 15:48:00.448155 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:01 crc kubenswrapper[4730]: I0320 15:48:01.488165 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567028-s6xcp"] Mar 20 15:48:01 crc kubenswrapper[4730]: I0320 15:48:01.495936 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 15:48:02 crc kubenswrapper[4730]: I0320 15:48:02.454178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" event={"ID":"e56ca246-99ac-4397-a499-62738ac94a39","Type":"ContainerStarted","Data":"5141a5c6a1cf5183972f40437d284ec8b2b455a48c23c168fde813932fc800eb"} Mar 20 15:48:03 crc kubenswrapper[4730]: I0320 15:48:03.462554 4730 generic.go:334] "Generic (PLEG): container finished" podID="e56ca246-99ac-4397-a499-62738ac94a39" containerID="44d76ae85c164cacb0e0982473fd32dc59c0d37d2af3868ef4b22b1a51c8b024" exitCode=0 Mar 20 15:48:03 crc kubenswrapper[4730]: I0320 15:48:03.462666 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" event={"ID":"e56ca246-99ac-4397-a499-62738ac94a39","Type":"ContainerDied","Data":"44d76ae85c164cacb0e0982473fd32dc59c0d37d2af3868ef4b22b1a51c8b024"} Mar 20 15:48:04 crc kubenswrapper[4730]: I0320 15:48:04.718281 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:04 crc kubenswrapper[4730]: I0320 15:48:04.798801 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578cc\" (UniqueName: \"kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc\") pod \"e56ca246-99ac-4397-a499-62738ac94a39\" (UID: \"e56ca246-99ac-4397-a499-62738ac94a39\") " Mar 20 15:48:04 crc kubenswrapper[4730]: I0320 15:48:04.804437 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc" (OuterVolumeSpecName: "kube-api-access-578cc") pod "e56ca246-99ac-4397-a499-62738ac94a39" (UID: "e56ca246-99ac-4397-a499-62738ac94a39"). InnerVolumeSpecName "kube-api-access-578cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:48:04 crc kubenswrapper[4730]: I0320 15:48:04.900658 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578cc\" (UniqueName: \"kubernetes.io/projected/e56ca246-99ac-4397-a499-62738ac94a39-kube-api-access-578cc\") on node \"crc\" DevicePath \"\"" Mar 20 15:48:05 crc kubenswrapper[4730]: I0320 15:48:05.475380 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" event={"ID":"e56ca246-99ac-4397-a499-62738ac94a39","Type":"ContainerDied","Data":"5141a5c6a1cf5183972f40437d284ec8b2b455a48c23c168fde813932fc800eb"} Mar 20 15:48:05 crc kubenswrapper[4730]: I0320 15:48:05.475418 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5141a5c6a1cf5183972f40437d284ec8b2b455a48c23c168fde813932fc800eb" Mar 20 15:48:05 crc kubenswrapper[4730]: I0320 15:48:05.475466 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567028-s6xcp" Mar 20 15:48:05 crc kubenswrapper[4730]: I0320 15:48:05.777648 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567022-wf5nv"] Mar 20 15:48:05 crc kubenswrapper[4730]: I0320 15:48:05.781340 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567022-wf5nv"] Mar 20 15:48:07 crc kubenswrapper[4730]: I0320 15:48:07.541266 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d87adfe-3206-4175-8d8f-5a00015cc61e" path="/var/lib/kubelet/pods/7d87adfe-3206-4175-8d8f-5a00015cc61e/volumes" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.142107 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567030-pwdln"] Mar 20 15:50:00 crc kubenswrapper[4730]: E0320 15:50:00.142821 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56ca246-99ac-4397-a499-62738ac94a39" containerName="oc" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.142832 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56ca246-99ac-4397-a499-62738ac94a39" containerName="oc" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.142932 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56ca246-99ac-4397-a499-62738ac94a39" containerName="oc" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.143341 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.144873 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.145300 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.145531 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cck\" (UniqueName: \"kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck\") pod \"auto-csr-approver-29567030-pwdln\" (UID: \"b7dcd73b-be94-4b96-b001-593d2fd56aa3\") " pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.145710 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.151327 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567030-pwdln"] Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.246522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cck\" (UniqueName: \"kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck\") pod \"auto-csr-approver-29567030-pwdln\" (UID: \"b7dcd73b-be94-4b96-b001-593d2fd56aa3\") " pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.276437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cck\" (UniqueName: \"kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck\") pod \"auto-csr-approver-29567030-pwdln\" (UID: \"b7dcd73b-be94-4b96-b001-593d2fd56aa3\") " 
pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.459556 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:00 crc kubenswrapper[4730]: I0320 15:50:00.637866 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567030-pwdln"] Mar 20 15:50:01 crc kubenswrapper[4730]: I0320 15:50:01.184466 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567030-pwdln" event={"ID":"b7dcd73b-be94-4b96-b001-593d2fd56aa3","Type":"ContainerStarted","Data":"1e4947204f122f48f44729439abc35d708180684d597470f1ef2ebb11d1aef36"} Mar 20 15:50:03 crc kubenswrapper[4730]: I0320 15:50:03.200049 4730 generic.go:334] "Generic (PLEG): container finished" podID="b7dcd73b-be94-4b96-b001-593d2fd56aa3" containerID="db53fcef559ab1b37329ca537473be13177cc4e3055c12b3c5b8536921ff4616" exitCode=0 Mar 20 15:50:03 crc kubenswrapper[4730]: I0320 15:50:03.200140 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567030-pwdln" event={"ID":"b7dcd73b-be94-4b96-b001-593d2fd56aa3","Type":"ContainerDied","Data":"db53fcef559ab1b37329ca537473be13177cc4e3055c12b3c5b8536921ff4616"} Mar 20 15:50:04 crc kubenswrapper[4730]: I0320 15:50:04.393211 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:04 crc kubenswrapper[4730]: I0320 15:50:04.593854 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8cck\" (UniqueName: \"kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck\") pod \"b7dcd73b-be94-4b96-b001-593d2fd56aa3\" (UID: \"b7dcd73b-be94-4b96-b001-593d2fd56aa3\") " Mar 20 15:50:04 crc kubenswrapper[4730]: I0320 15:50:04.600453 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck" (OuterVolumeSpecName: "kube-api-access-d8cck") pod "b7dcd73b-be94-4b96-b001-593d2fd56aa3" (UID: "b7dcd73b-be94-4b96-b001-593d2fd56aa3"). InnerVolumeSpecName "kube-api-access-d8cck". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:50:04 crc kubenswrapper[4730]: I0320 15:50:04.694750 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8cck\" (UniqueName: \"kubernetes.io/projected/b7dcd73b-be94-4b96-b001-593d2fd56aa3-kube-api-access-d8cck\") on node \"crc\" DevicePath \"\"" Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.212054 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567030-pwdln" event={"ID":"b7dcd73b-be94-4b96-b001-593d2fd56aa3","Type":"ContainerDied","Data":"1e4947204f122f48f44729439abc35d708180684d597470f1ef2ebb11d1aef36"} Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.212086 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567030-pwdln" Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.212091 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4947204f122f48f44729439abc35d708180684d597470f1ef2ebb11d1aef36" Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.441685 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567024-s2r9c"] Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.444706 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567024-s2r9c"] Mar 20 15:50:05 crc kubenswrapper[4730]: I0320 15:50:05.541024 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f093381-3bf4-49ff-beb4-f44aa012c521" path="/var/lib/kubelet/pods/3f093381-3bf4-49ff-beb4-f44aa012c521/volumes" Mar 20 15:50:12 crc kubenswrapper[4730]: I0320 15:50:12.880504 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:50:12 crc kubenswrapper[4730]: I0320 15:50:12.881076 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:50:28 crc kubenswrapper[4730]: I0320 15:50:28.212733 4730 scope.go:117] "RemoveContainer" containerID="bb5b04ddf5d3880ba3c77fa4e7069bd85e272160b1890e28a7de00d43e3a9f9e" Mar 20 15:50:28 crc kubenswrapper[4730]: I0320 15:50:28.251729 4730 scope.go:117] "RemoveContainer" 
containerID="6ba1acd4b6440038c4d2f11f36de1734bab2b24cdd1e2d4018cd0e97b421d598" Mar 20 15:50:42 crc kubenswrapper[4730]: I0320 15:50:42.880717 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:50:42 crc kubenswrapper[4730]: I0320 15:50:42.881277 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:51:12 crc kubenswrapper[4730]: I0320 15:51:12.880275 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:51:12 crc kubenswrapper[4730]: I0320 15:51:12.882082 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:51:12 crc kubenswrapper[4730]: I0320 15:51:12.882137 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:51:12 crc kubenswrapper[4730]: I0320 15:51:12.882863 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 15:51:12 crc kubenswrapper[4730]: I0320 15:51:12.882915 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c" gracePeriod=600 Mar 20 15:51:13 crc kubenswrapper[4730]: I0320 15:51:13.615043 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c" exitCode=0 Mar 20 15:51:13 crc kubenswrapper[4730]: I0320 15:51:13.615155 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c"} Mar 20 15:51:13 crc kubenswrapper[4730]: I0320 15:51:13.615608 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393"} Mar 20 15:51:13 crc kubenswrapper[4730]: I0320 15:51:13.615633 4730 scope.go:117] "RemoveContainer" containerID="418b64bd31efa72e03b6036c281348bfc6e1d5be086f3887fe653df9e0316583" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.816764 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dwg9x"] Mar 20 15:51:45 crc kubenswrapper[4730]: E0320 15:51:45.817542 
4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7dcd73b-be94-4b96-b001-593d2fd56aa3" containerName="oc" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.817554 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7dcd73b-be94-4b96-b001-593d2fd56aa3" containerName="oc" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.818097 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7dcd73b-be94-4b96-b001-593d2fd56aa3" containerName="oc" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.820741 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dwg9x" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.823720 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-89r9d"] Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.824602 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.841118 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.841348 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.842960 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-q7m7b" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.843398 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-m9shs" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.849634 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dwg9x"] Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.853196 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-89r9d"] Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.869574 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qcz52"] Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.870335 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.872310 4730 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wg4bg" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.874344 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qcz52"] Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.904859 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9t96\" (UniqueName: \"kubernetes.io/projected/096957e4-5a35-42f7-adf0-cac7672589a4-kube-api-access-s9t96\") pod \"cert-manager-858654f9db-dwg9x\" (UID: \"096957e4-5a35-42f7-adf0-cac7672589a4\") " pod="cert-manager/cert-manager-858654f9db-dwg9x" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.904980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rfm\" (UniqueName: \"kubernetes.io/projected/b59581d5-071c-4764-9ef6-50ea4724e0a6-kube-api-access-d7rfm\") pod \"cert-manager-cainjector-cf98fcc89-89r9d\" (UID: \"b59581d5-071c-4764-9ef6-50ea4724e0a6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" Mar 20 15:51:45 crc kubenswrapper[4730]: I0320 15:51:45.905019 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms66r\" (UniqueName: 
\"kubernetes.io/projected/e7c6b209-7bad-4eb0-b8d0-61a602be9b89-kube-api-access-ms66r\") pod \"cert-manager-webhook-687f57d79b-qcz52\" (UID: \"e7c6b209-7bad-4eb0-b8d0-61a602be9b89\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.005782 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9t96\" (UniqueName: \"kubernetes.io/projected/096957e4-5a35-42f7-adf0-cac7672589a4-kube-api-access-s9t96\") pod \"cert-manager-858654f9db-dwg9x\" (UID: \"096957e4-5a35-42f7-adf0-cac7672589a4\") " pod="cert-manager/cert-manager-858654f9db-dwg9x" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.005847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rfm\" (UniqueName: \"kubernetes.io/projected/b59581d5-071c-4764-9ef6-50ea4724e0a6-kube-api-access-d7rfm\") pod \"cert-manager-cainjector-cf98fcc89-89r9d\" (UID: \"b59581d5-071c-4764-9ef6-50ea4724e0a6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.005876 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms66r\" (UniqueName: \"kubernetes.io/projected/e7c6b209-7bad-4eb0-b8d0-61a602be9b89-kube-api-access-ms66r\") pod \"cert-manager-webhook-687f57d79b-qcz52\" (UID: \"e7c6b209-7bad-4eb0-b8d0-61a602be9b89\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.025166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms66r\" (UniqueName: \"kubernetes.io/projected/e7c6b209-7bad-4eb0-b8d0-61a602be9b89-kube-api-access-ms66r\") pod \"cert-manager-webhook-687f57d79b-qcz52\" (UID: \"e7c6b209-7bad-4eb0-b8d0-61a602be9b89\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.026773 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rfm\" (UniqueName: \"kubernetes.io/projected/b59581d5-071c-4764-9ef6-50ea4724e0a6-kube-api-access-d7rfm\") pod \"cert-manager-cainjector-cf98fcc89-89r9d\" (UID: \"b59581d5-071c-4764-9ef6-50ea4724e0a6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.028555 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9t96\" (UniqueName: \"kubernetes.io/projected/096957e4-5a35-42f7-adf0-cac7672589a4-kube-api-access-s9t96\") pod \"cert-manager-858654f9db-dwg9x\" (UID: \"096957e4-5a35-42f7-adf0-cac7672589a4\") " pod="cert-manager/cert-manager-858654f9db-dwg9x" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.155611 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dwg9x" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.169985 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.184839 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.414081 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dwg9x"] Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.445938 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-89r9d"] Mar 20 15:51:46 crc kubenswrapper[4730]: W0320 15:51:46.451156 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59581d5_071c_4764_9ef6_50ea4724e0a6.slice/crio-d11570b1056c2b825ceaefd7e2f85c8dafee33ffe1d2fdee038e11a1e280a3c8 WatchSource:0}: Error finding container d11570b1056c2b825ceaefd7e2f85c8dafee33ffe1d2fdee038e11a1e280a3c8: Status 404 returned error can't find the container with id d11570b1056c2b825ceaefd7e2f85c8dafee33ffe1d2fdee038e11a1e280a3c8 Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.692270 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qcz52"] Mar 20 15:51:46 crc kubenswrapper[4730]: W0320 15:51:46.695196 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c6b209_7bad_4eb0_b8d0_61a602be9b89.slice/crio-9e4dfe5d7fed48bf606dc19a5dea601ea0a47571018fefdbe959bc2fafaab550 WatchSource:0}: Error finding container 9e4dfe5d7fed48bf606dc19a5dea601ea0a47571018fefdbe959bc2fafaab550: Status 404 returned error can't find the container with id 9e4dfe5d7fed48bf606dc19a5dea601ea0a47571018fefdbe959bc2fafaab550 Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.805998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dwg9x" 
event={"ID":"096957e4-5a35-42f7-adf0-cac7672589a4","Type":"ContainerStarted","Data":"c59a935482eb9ff1774c6387b00890ba9f481b157706bdf808afd32ad2202efc"} Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.807564 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" event={"ID":"e7c6b209-7bad-4eb0-b8d0-61a602be9b89","Type":"ContainerStarted","Data":"9e4dfe5d7fed48bf606dc19a5dea601ea0a47571018fefdbe959bc2fafaab550"} Mar 20 15:51:46 crc kubenswrapper[4730]: I0320 15:51:46.808816 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" event={"ID":"b59581d5-071c-4764-9ef6-50ea4724e0a6","Type":"ContainerStarted","Data":"d11570b1056c2b825ceaefd7e2f85c8dafee33ffe1d2fdee038e11a1e280a3c8"} Mar 20 15:51:49 crc kubenswrapper[4730]: I0320 15:51:49.824680 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" event={"ID":"e7c6b209-7bad-4eb0-b8d0-61a602be9b89","Type":"ContainerStarted","Data":"bea517557beebe1c6e1691c7e382b6d8fcb55218b38c9f69540e207508db5d57"} Mar 20 15:51:49 crc kubenswrapper[4730]: I0320 15:51:49.825290 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:49 crc kubenswrapper[4730]: I0320 15:51:49.847274 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" podStartSLOduration=2.733959531 podStartE2EDuration="4.847239206s" podCreationTimestamp="2026-03-20 15:51:45 +0000 UTC" firstStartedPulling="2026-03-20 15:51:46.697675666 +0000 UTC m=+765.911047045" lastFinishedPulling="2026-03-20 15:51:48.810955341 +0000 UTC m=+768.024326720" observedRunningTime="2026-03-20 15:51:49.840534934 +0000 UTC m=+769.053906303" watchObservedRunningTime="2026-03-20 15:51:49.847239206 +0000 UTC m=+769.060610575" Mar 20 15:51:50 crc kubenswrapper[4730]: 
I0320 15:51:50.835001 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" event={"ID":"b59581d5-071c-4764-9ef6-50ea4724e0a6","Type":"ContainerStarted","Data":"599e9bd6946d3e853b45c14376407263951b971834c819015e7d19ed7c52ba7c"} Mar 20 15:51:50 crc kubenswrapper[4730]: I0320 15:51:50.838702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dwg9x" event={"ID":"096957e4-5a35-42f7-adf0-cac7672589a4","Type":"ContainerStarted","Data":"aadceb6e1a5ca740041d23022aa81e2db5c6c81b12d873b540355ae4258b69e2"} Mar 20 15:51:50 crc kubenswrapper[4730]: I0320 15:51:50.859166 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-89r9d" podStartSLOduration=2.235173188 podStartE2EDuration="5.859149963s" podCreationTimestamp="2026-03-20 15:51:45 +0000 UTC" firstStartedPulling="2026-03-20 15:51:46.452955629 +0000 UTC m=+765.666326998" lastFinishedPulling="2026-03-20 15:51:50.076932404 +0000 UTC m=+769.290303773" observedRunningTime="2026-03-20 15:51:50.857303491 +0000 UTC m=+770.070674870" watchObservedRunningTime="2026-03-20 15:51:50.859149963 +0000 UTC m=+770.072521332" Mar 20 15:51:50 crc kubenswrapper[4730]: I0320 15:51:50.881628 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dwg9x" podStartSLOduration=2.170115998 podStartE2EDuration="5.881597135s" podCreationTimestamp="2026-03-20 15:51:45 +0000 UTC" firstStartedPulling="2026-03-20 15:51:46.413672896 +0000 UTC m=+765.627044265" lastFinishedPulling="2026-03-20 15:51:50.125154033 +0000 UTC m=+769.338525402" observedRunningTime="2026-03-20 15:51:50.880327239 +0000 UTC m=+770.093698648" watchObservedRunningTime="2026-03-20 15:51:50.881597135 +0000 UTC m=+770.094968564" Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.885548 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-qj97f"] Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887439 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-controller" containerID="cri-o://31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887517 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="nbdb" containerID="cri-o://d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887608 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="northd" containerID="cri-o://462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887723 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="sbdb" containerID="cri-o://43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887741 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-acl-logging" containerID="cri-o://b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887810 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.887677 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-node" containerID="cri-o://e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" gracePeriod=30 Mar 20 15:51:54 crc kubenswrapper[4730]: I0320 15:51:54.930844 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" containerID="cri-o://35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" gracePeriod=30 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.228327 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/3.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.230478 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovn-acl-logging/0.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.230957 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovn-controller/0.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.231411 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248051 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248112 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248155 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248176 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz64b\" (UniqueName: \"kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248191 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248211 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248234 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248275 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248300 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc 
kubenswrapper[4730]: I0320 15:51:55.248351 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248364 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248418 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248436 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248449 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248465 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248481 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248512 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config\") pod \"c4b4e0e8-af33-491e-b1d1-31079d90c656\" (UID: \"c4b4e0e8-af33-491e-b1d1-31079d90c656\") " Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248802 4730 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash" (OuterVolumeSpecName: "host-slash") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248854 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248873 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.248892 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249120 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249159 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249179 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket" (OuterVolumeSpecName: "log-socket") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249204 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249194 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249221 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log" (OuterVolumeSpecName: "node-log") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249241 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249454 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249524 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249574 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.249592 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.254398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.254772 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b" (OuterVolumeSpecName: "kube-api-access-mz64b") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "kube-api-access-mz64b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.267028 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c4b4e0e8-af33-491e-b1d1-31079d90c656" (UID: "c4b4e0e8-af33-491e-b1d1-31079d90c656"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282594 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-flvb5"] Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282831 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-node" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282847 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-node" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282859 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="nbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282865 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="nbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282873 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282879 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282886 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="northd" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282892 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="northd" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282900 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 
20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282906 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282916 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282921 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282930 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="sbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282935 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="sbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282943 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282948 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282959 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282964 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282975 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-acl-logging" Mar 20 15:51:55 crc 
kubenswrapper[4730]: I0320 15:51:55.282980 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-acl-logging" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.282989 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kubecfg-setup" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.282996 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kubecfg-setup" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.283003 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283008 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283110 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283121 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283128 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="nbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283136 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="sbdb" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283145 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovn-acl-logging" Mar 20 
15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283152 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283160 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="kube-rbac-proxy-node" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283169 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283176 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283184 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="northd" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283191 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.283197 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.284559 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.284582 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerName="ovnkube-controller" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.286353 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349580 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovn-node-metrics-cert\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-kubelet\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349704 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcbxp\" (UniqueName: \"kubernetes.io/projected/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-kube-api-access-gcbxp\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349813 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349870 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-var-lib-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349891 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-config\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349910 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349939 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-bin\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349981 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-node-log\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.349998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-env-overrides\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350012 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-script-lib\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350028 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-log-socket\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-systemd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350094 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-etc-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-slash\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350158 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-ovn\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-netd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350197 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350213 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-systemd-units\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350238 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-netns\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350319 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350332 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz64b\" (UniqueName: \"kubernetes.io/projected/c4b4e0e8-af33-491e-b1d1-31079d90c656-kube-api-access-mz64b\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350344 4730 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-node-log\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350355 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350364 4730 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350371 4730 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350379 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350389 4730 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-slash\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350398 4730 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350406 4730 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350413 4730 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350421 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350429 4730 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350436 4730 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-kubelet\") on node \"crc\" 
DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350443 4730 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350452 4730 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350459 4730 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4b4e0e8-af33-491e-b1d1-31079d90c656-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350466 4730 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-log-socket\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.350474 4730 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4b4e0e8-af33-491e-b1d1-31079d90c656-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452128 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-node-log\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452195 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-env-overrides\") pod 
\"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452234 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-script-lib\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-node-log\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452344 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-log-socket\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452420 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-log-socket\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452459 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-systemd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 
15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452433 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-systemd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-etc-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452601 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-slash\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452650 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-ovn\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452687 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-netd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452724 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452753 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-systemd-units\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452784 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-netns\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452844 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovn-node-metrics-cert\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452894 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-kubelet\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.452958 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gcbxp\" (UniqueName: \"kubernetes.io/projected/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-kube-api-access-gcbxp\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453005 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-var-lib-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-config\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453144 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-bin\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-env-overrides\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-bin\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453297 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-etc-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453324 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-slash\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453347 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-ovn\") pod 
\"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-cni-netd\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453390 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-run-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-systemd-units\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-var-lib-openvswitch\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453487 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: 
\"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453491 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-script-lib\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453506 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-ovn-kubernetes\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453516 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-kubelet\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-host-run-netns\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.453924 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovnkube-config\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 
20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.461179 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-ovn-node-metrics-cert\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.470462 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcbxp\" (UniqueName: \"kubernetes.io/projected/393a72ce-ab43-44d6-a484-dcc1ebe1d48e-kube-api-access-gcbxp\") pod \"ovnkube-node-flvb5\" (UID: \"393a72ce-ab43-44d6-a484-dcc1ebe1d48e\") " pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.600813 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:51:55 crc kubenswrapper[4730]: W0320 15:51:55.627033 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393a72ce_ab43_44d6_a484_dcc1ebe1d48e.slice/crio-1f466238e3c2440ae3bdf44e2467841d891dda10a0371c505f812e48d302c35f WatchSource:0}: Error finding container 1f466238e3c2440ae3bdf44e2467841d891dda10a0371c505f812e48d302c35f: Status 404 returned error can't find the container with id 1f466238e3c2440ae3bdf44e2467841d891dda10a0371c505f812e48d302c35f Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.880191 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovnkube-controller/3.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.885335 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovn-acl-logging/0.log" Mar 20 15:51:55 crc 
kubenswrapper[4730]: I0320 15:51:55.885885 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qj97f_c4b4e0e8-af33-491e-b1d1-31079d90c656/ovn-controller/0.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.886300 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887295 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887444 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887534 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887590 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887643 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887694 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" 
exitCode=143 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.887750 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4b4e0e8-af33-491e-b1d1-31079d90c656" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" exitCode=143 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.886390 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.886346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888053 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888068 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888072 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888079 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888156 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888167 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888176 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888185 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888192 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888199 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888204 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888209 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888214 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888219 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888224 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888239 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888259 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888264 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888269 4730 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888275 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888280 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888285 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888290 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888294 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888299 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888306 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 
15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888314 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888319 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888325 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888330 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888335 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888380 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888387 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888393 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 
15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888398 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888404 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888412 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qj97f" event={"ID":"c4b4e0e8-af33-491e-b1d1-31079d90c656","Type":"ContainerDied","Data":"f0bb8a04718d250ff389e424bacc9dc0320526af93827c03eb732b797d1a25fb"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888420 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888426 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888432 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888437 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888442 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888447 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888452 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888457 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888463 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.888468 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.889409 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/2.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.890103 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/1.log" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.890133 4730 generic.go:334] "Generic (PLEG): container finished" podID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" 
containerID="b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e" exitCode=2 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.890179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerDied","Data":"b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.890196 4730 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.890636 4730 scope.go:117] "RemoveContainer" containerID="b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e" Mar 20 15:51:55 crc kubenswrapper[4730]: E0320 15:51:55.890816 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54)\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.892762 4730 generic.go:334] "Generic (PLEG): container finished" podID="393a72ce-ab43-44d6-a484-dcc1ebe1d48e" containerID="6e3523b44c23ee77011cede2dd0b960723c6800426b70b5ad879d54360b10210" exitCode=0 Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.892796 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerDied","Data":"6e3523b44c23ee77011cede2dd0b960723c6800426b70b5ad879d54360b10210"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.892818 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" 
event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"1f466238e3c2440ae3bdf44e2467841d891dda10a0371c505f812e48d302c35f"} Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.904121 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.924451 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj97f"] Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.930745 4730 scope.go:117] "RemoveContainer" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.934420 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qj97f"] Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.955279 4730 scope.go:117] "RemoveContainer" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.969999 4730 scope.go:117] "RemoveContainer" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.984492 4730 scope.go:117] "RemoveContainer" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:55 crc kubenswrapper[4730]: I0320 15:51:55.995627 4730 scope.go:117] "RemoveContainer" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.033220 4730 scope.go:117] "RemoveContainer" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.046617 4730 scope.go:117] "RemoveContainer" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.073161 4730 
scope.go:117] "RemoveContainer" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.088887 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.089397 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.089427 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} err="failed to get container status \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.089449 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.089778 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": container with ID starting with 7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed not found: ID does not exist" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 
15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.089800 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} err="failed to get container status \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": rpc error: code = NotFound desc = could not find container \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": container with ID starting with 7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.089813 4730 scope.go:117] "RemoveContainer" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.090080 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": container with ID starting with 43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f not found: ID does not exist" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.090101 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} err="failed to get container status \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": rpc error: code = NotFound desc = could not find container \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": container with ID starting with 43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.090115 4730 scope.go:117] "RemoveContainer" 
containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.090579 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": container with ID starting with d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01 not found: ID does not exist" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.090636 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} err="failed to get container status \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": rpc error: code = NotFound desc = could not find container \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": container with ID starting with d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.090670 4730 scope.go:117] "RemoveContainer" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.091032 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": container with ID starting with 462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c not found: ID does not exist" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.091057 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} err="failed to get container status \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": rpc error: code = NotFound desc = could not find container \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": container with ID starting with 462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.091072 4730 scope.go:117] "RemoveContainer" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.091422 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": container with ID starting with 006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d not found: ID does not exist" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.091608 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} err="failed to get container status \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": rpc error: code = NotFound desc = could not find container \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": container with ID starting with 006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.091693 4730 scope.go:117] "RemoveContainer" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.092088 4730 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": container with ID starting with e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91 not found: ID does not exist" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.092113 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} err="failed to get container status \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": rpc error: code = NotFound desc = could not find container \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": container with ID starting with e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.092132 4730 scope.go:117] "RemoveContainer" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.092501 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": container with ID starting with b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db not found: ID does not exist" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.092595 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} err="failed to get container status \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": rpc error: code = NotFound desc = could not find container 
\"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": container with ID starting with b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.092657 4730 scope.go:117] "RemoveContainer" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.092995 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": container with ID starting with 31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c not found: ID does not exist" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093021 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} err="failed to get container status \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": rpc error: code = NotFound desc = could not find container \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": container with ID starting with 31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093036 4730 scope.go:117] "RemoveContainer" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: E0320 15:51:56.093294 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": container with ID starting with b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e not found: ID does not exist" 
containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093323 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} err="failed to get container status \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": rpc error: code = NotFound desc = could not find container \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": container with ID starting with b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093343 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093600 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} err="failed to get container status \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093630 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.093963 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} err="failed to get container status \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": rpc error: code = NotFound desc = could 
not find container \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": container with ID starting with 7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.094046 4730 scope.go:117] "RemoveContainer" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.094519 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} err="failed to get container status \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": rpc error: code = NotFound desc = could not find container \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": container with ID starting with 43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.094542 4730 scope.go:117] "RemoveContainer" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.094964 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} err="failed to get container status \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": rpc error: code = NotFound desc = could not find container \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": container with ID starting with d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.094986 4730 scope.go:117] "RemoveContainer" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 
15:51:56.095200 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} err="failed to get container status \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": rpc error: code = NotFound desc = could not find container \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": container with ID starting with 462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.095226 4730 scope.go:117] "RemoveContainer" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.095520 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} err="failed to get container status \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": rpc error: code = NotFound desc = could not find container \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": container with ID starting with 006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.095540 4730 scope.go:117] "RemoveContainer" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.095882 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} err="failed to get container status \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": rpc error: code = NotFound desc = could not find container \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": container with ID starting with 
e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.095926 4730 scope.go:117] "RemoveContainer" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096191 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} err="failed to get container status \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": rpc error: code = NotFound desc = could not find container \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": container with ID starting with b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096216 4730 scope.go:117] "RemoveContainer" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096479 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} err="failed to get container status \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": rpc error: code = NotFound desc = could not find container \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": container with ID starting with 31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096500 4730 scope.go:117] "RemoveContainer" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096731 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} err="failed to get container status \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": rpc error: code = NotFound desc = could not find container \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": container with ID starting with b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096757 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.096980 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} err="failed to get container status \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097000 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097268 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} err="failed to get container status \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": rpc error: code = NotFound desc = could not find container \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": container with ID starting with 7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed not found: ID does not 
exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097306 4730 scope.go:117] "RemoveContainer" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097641 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} err="failed to get container status \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": rpc error: code = NotFound desc = could not find container \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": container with ID starting with 43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097661 4730 scope.go:117] "RemoveContainer" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097865 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} err="failed to get container status \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": rpc error: code = NotFound desc = could not find container \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": container with ID starting with d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.097890 4730 scope.go:117] "RemoveContainer" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.098198 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} err="failed to get container status 
\"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": rpc error: code = NotFound desc = could not find container \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": container with ID starting with 462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.098241 4730 scope.go:117] "RemoveContainer" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.098612 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} err="failed to get container status \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": rpc error: code = NotFound desc = could not find container \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": container with ID starting with 006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.098713 4730 scope.go:117] "RemoveContainer" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099120 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} err="failed to get container status \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": rpc error: code = NotFound desc = could not find container \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": container with ID starting with e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099151 4730 scope.go:117] "RemoveContainer" 
containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099388 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} err="failed to get container status \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": rpc error: code = NotFound desc = could not find container \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": container with ID starting with b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099458 4730 scope.go:117] "RemoveContainer" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099721 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} err="failed to get container status \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": rpc error: code = NotFound desc = could not find container \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": container with ID starting with 31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099747 4730 scope.go:117] "RemoveContainer" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.099958 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} err="failed to get container status \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": rpc error: code = NotFound desc = could 
not find container \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": container with ID starting with b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.100033 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.100636 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} err="failed to get container status \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.100663 4730 scope.go:117] "RemoveContainer" containerID="7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.100898 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed"} err="failed to get container status \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": rpc error: code = NotFound desc = could not find container \"7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed\": container with ID starting with 7263ba407f8e4372a1f43899fa38bbfefcf8ad7bf2f07d5e4fc6f145dc922aed not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.100967 4730 scope.go:117] "RemoveContainer" containerID="43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 
15:51:56.101261 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f"} err="failed to get container status \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": rpc error: code = NotFound desc = could not find container \"43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f\": container with ID starting with 43c3986f6340071e63882af0ddb383761d6c4fda45c266b60036c3c9416c9b2f not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.101283 4730 scope.go:117] "RemoveContainer" containerID="d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.101570 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01"} err="failed to get container status \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": rpc error: code = NotFound desc = could not find container \"d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01\": container with ID starting with d938120ad60db79c12e709e5133b6705b2f730747fecb69de814b1b6aaf73c01 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.101642 4730 scope.go:117] "RemoveContainer" containerID="462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102034 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c"} err="failed to get container status \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": rpc error: code = NotFound desc = could not find container \"462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c\": container with ID starting with 
462d6624fd06581ef72076b52be20160973f59b42767b150892bebc10ce3417c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102055 4730 scope.go:117] "RemoveContainer" containerID="006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102325 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d"} err="failed to get container status \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": rpc error: code = NotFound desc = could not find container \"006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d\": container with ID starting with 006f9193e64005554c87bfbbc1a18a400222f98aaade64f97c6f5d73a168c19d not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102368 4730 scope.go:117] "RemoveContainer" containerID="e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102659 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91"} err="failed to get container status \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": rpc error: code = NotFound desc = could not find container \"e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91\": container with ID starting with e0cabf222a34e7bb1792d1523467402505bc7c98ce1d9d8dc436e30355438a91 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102696 4730 scope.go:117] "RemoveContainer" containerID="b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102900 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db"} err="failed to get container status \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": rpc error: code = NotFound desc = could not find container \"b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db\": container with ID starting with b4c48a7668275e645faa69d28f9a51bf63ee0491445dc055e75b697d764de7db not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.102919 4730 scope.go:117] "RemoveContainer" containerID="31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.103100 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c"} err="failed to get container status \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": rpc error: code = NotFound desc = could not find container \"31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c\": container with ID starting with 31218fb8030abf13b865fafaf7dcfb8b6f589bbc6286dc181a90d1b106493f8c not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.103123 4730 scope.go:117] "RemoveContainer" containerID="b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.103332 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e"} err="failed to get container status \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": rpc error: code = NotFound desc = could not find container \"b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e\": container with ID starting with b4f5c77e05a32be063074d70c18bc46f4c2029b85d34df11fb18c01efc962a4e not found: ID does not 
exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.103352 4730 scope.go:117] "RemoveContainer" containerID="35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.103540 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971"} err="failed to get container status \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": rpc error: code = NotFound desc = could not find container \"35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971\": container with ID starting with 35f50ce915b43165e56de90b2b1949244c29bf63fd85bb0583a8a8a6988d8971 not found: ID does not exist" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.188180 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qcz52" Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904019 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"4f347eecb49e07026cb50a573ae9c5d4c53f7a4603237c65d54a0fdfd4858a44"} Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904069 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"2ab08c4edbe9028a68f66af0a72993b98c675bde8ba4640f4a0b2b282b102670"} Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904082 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"97d950c2bd1fbbd5113ee9e71caf8d3c07baa224f6adba510779486093e3446e"} Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904097 
4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"768a32271ccd8aa0722cd68031532fb2b37937e35174a39bc4adae7f42ac2791"} Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904112 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"477093231ce6aa0e8da323f4ec33551f38f030880e24fb80c9ebe379d61ac84e"} Mar 20 15:51:56 crc kubenswrapper[4730]: I0320 15:51:56.904126 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"3ac79110b694027c697e932f2d307bb2538e95dacd292559442a53d8b8abbf9e"} Mar 20 15:51:57 crc kubenswrapper[4730]: I0320 15:51:57.541354 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b4e0e8-af33-491e-b1d1-31079d90c656" path="/var/lib/kubelet/pods/c4b4e0e8-af33-491e-b1d1-31079d90c656/volumes" Mar 20 15:51:58 crc kubenswrapper[4730]: I0320 15:51:58.921869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"721c4967a7e85d772035ccd640b4f1e1162cf4dfc1a938e8efa29daa3dfb1191"} Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.131235 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567032-mfvhl"] Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.132967 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.135721 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.135834 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.136113 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.223578 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twt2\" (UniqueName: \"kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2\") pod \"auto-csr-approver-29567032-mfvhl\" (UID: \"76deb34d-7c3d-4510-9b0a-ac56dcca047a\") " pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.324798 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7twt2\" (UniqueName: \"kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2\") pod \"auto-csr-approver-29567032-mfvhl\" (UID: \"76deb34d-7c3d-4510-9b0a-ac56dcca047a\") " pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.346312 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twt2\" (UniqueName: \"kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2\") pod \"auto-csr-approver-29567032-mfvhl\" (UID: \"76deb34d-7c3d-4510-9b0a-ac56dcca047a\") " pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: I0320 15:52:00.446439 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: E0320 15:52:00.474328 4730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(1e43ef10257d0b2bec80ba1eda076867fcaea9dd7cdf1804d016930de389ed6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:52:00 crc kubenswrapper[4730]: E0320 15:52:00.474718 4730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(1e43ef10257d0b2bec80ba1eda076867fcaea9dd7cdf1804d016930de389ed6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: E0320 15:52:00.474751 4730 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(1e43ef10257d0b2bec80ba1eda076867fcaea9dd7cdf1804d016930de389ed6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:00 crc kubenswrapper[4730]: E0320 15:52:00.474834 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(1e43ef10257d0b2bec80ba1eda076867fcaea9dd7cdf1804d016930de389ed6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.828012 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567032-mfvhl"] Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.828126 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.828496 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:01 crc kubenswrapper[4730]: E0320 15:52:01.875863 4730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(ff8e835d53c597d8a6f74ecb7d426d47a474bfb647a1d54cdda58e584ec73301): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 15:52:01 crc kubenswrapper[4730]: E0320 15:52:01.875925 4730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(ff8e835d53c597d8a6f74ecb7d426d47a474bfb647a1d54cdda58e584ec73301): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:01 crc kubenswrapper[4730]: E0320 15:52:01.875946 4730 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(ff8e835d53c597d8a6f74ecb7d426d47a474bfb647a1d54cdda58e584ec73301): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:01 crc kubenswrapper[4730]: E0320 15:52:01.875989 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(ff8e835d53c597d8a6f74ecb7d426d47a474bfb647a1d54cdda58e584ec73301): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.940875 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" event={"ID":"393a72ce-ab43-44d6-a484-dcc1ebe1d48e","Type":"ContainerStarted","Data":"4e4212c062016e6ed95d31e2a1a1ed5789ecd5f6e3b2eaf969b29780913865ae"} Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.941101 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.941273 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.941318 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.966574 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.968783 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:01 crc kubenswrapper[4730]: I0320 15:52:01.976818 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" podStartSLOduration=6.976800493 podStartE2EDuration="6.976800493s" podCreationTimestamp="2026-03-20 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:52:01.971713519 +0000 UTC m=+781.185084898" watchObservedRunningTime="2026-03-20 15:52:01.976800493 +0000 UTC m=+781.190171872" Mar 20 15:52:10 crc kubenswrapper[4730]: I0320 
15:52:10.533580 4730 scope.go:117] "RemoveContainer" containerID="b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e" Mar 20 15:52:10 crc kubenswrapper[4730]: E0320 15:52:10.534773 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6r2kn_openshift-multus(6f97b1f1-1fad-44ec-8253-17dd6a5eee54)\"" pod="openshift-multus/multus-6r2kn" podUID="6f97b1f1-1fad-44ec-8253-17dd6a5eee54" Mar 20 15:52:14 crc kubenswrapper[4730]: I0320 15:52:14.533218 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:14 crc kubenswrapper[4730]: I0320 15:52:14.534339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:14 crc kubenswrapper[4730]: E0320 15:52:14.569918 4730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(82a14b4a8cd3a44a3643adb049ebc35ea750447c7fd00fbcf7de17d6022499dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:52:14 crc kubenswrapper[4730]: E0320 15:52:14.570010 4730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(82a14b4a8cd3a44a3643adb049ebc35ea750447c7fd00fbcf7de17d6022499dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:14 crc kubenswrapper[4730]: E0320 15:52:14.570047 4730 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(82a14b4a8cd3a44a3643adb049ebc35ea750447c7fd00fbcf7de17d6022499dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:14 crc kubenswrapper[4730]: E0320 15:52:14.570122 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29567032-mfvhl_openshift-infra(76deb34d-7c3d-4510-9b0a-ac56dcca047a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567032-mfvhl_openshift-infra_76deb34d-7c3d-4510-9b0a-ac56dcca047a_0(82a14b4a8cd3a44a3643adb049ebc35ea750447c7fd00fbcf7de17d6022499dc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.005937 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv"] Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.009336 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.012394 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.019600 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv"] Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.039323 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.039388 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.039710 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zls7g\" (UniqueName: \"kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: 
I0320 15:52:22.140180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.140301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zls7g\" (UniqueName: \"kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.140345 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.140794 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.140808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.157746 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zls7g\" (UniqueName: \"kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.333429 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: E0320 15:52:22.366721 4730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(d912d4bf6d2f446fec5eb826fae2313378f1d34dfd35d7dacff120a1355be493): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 15:52:22 crc kubenswrapper[4730]: E0320 15:52:22.366799 4730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(d912d4bf6d2f446fec5eb826fae2313378f1d34dfd35d7dacff120a1355be493): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: E0320 15:52:22.366828 4730 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(d912d4bf6d2f446fec5eb826fae2313378f1d34dfd35d7dacff120a1355be493): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:22 crc kubenswrapper[4730]: E0320 15:52:22.366898 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace(25d50abe-8eeb-4761-83b7-d9e7fbb78a76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace(25d50abe-8eeb-4761-83b7-d9e7fbb78a76)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(d912d4bf6d2f446fec5eb826fae2313378f1d34dfd35d7dacff120a1355be493): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" Mar 20 15:52:22 crc kubenswrapper[4730]: I0320 15:52:22.533725 4730 scope.go:117] "RemoveContainer" containerID="b07ba8437e9756f6cb976900c9db574ebb08c12f74c7cd2c86009c95fccf5b7e" Mar 20 15:52:23 crc kubenswrapper[4730]: I0320 15:52:23.090154 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/2.log" Mar 20 15:52:23 crc kubenswrapper[4730]: I0320 15:52:23.091066 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/1.log" Mar 20 15:52:23 crc kubenswrapper[4730]: I0320 15:52:23.091184 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:23 crc kubenswrapper[4730]: I0320 15:52:23.091190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r2kn" event={"ID":"6f97b1f1-1fad-44ec-8253-17dd6a5eee54","Type":"ContainerStarted","Data":"242e721c0e4a2c44f93a6e9eb81955d21f775c8c5592f6a79b8fff79bb41b348"} Mar 20 15:52:23 crc kubenswrapper[4730]: I0320 15:52:23.091708 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:23 crc kubenswrapper[4730]: E0320 15:52:23.117509 4730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(7e370178d2cdf7fe20f426bf26f3d266dfb432fd522d96c37432b06f044fd2e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 20 15:52:23 crc kubenswrapper[4730]: E0320 15:52:23.117627 4730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(7e370178d2cdf7fe20f426bf26f3d266dfb432fd522d96c37432b06f044fd2e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:23 crc kubenswrapper[4730]: E0320 15:52:23.117666 4730 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(7e370178d2cdf7fe20f426bf26f3d266dfb432fd522d96c37432b06f044fd2e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:23 crc kubenswrapper[4730]: E0320 15:52:23.117749 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace(25d50abe-8eeb-4761-83b7-d9e7fbb78a76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace(25d50abe-8eeb-4761-83b7-d9e7fbb78a76)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_openshift-marketplace_25d50abe-8eeb-4761-83b7-d9e7fbb78a76_0(7e370178d2cdf7fe20f426bf26f3d266dfb432fd522d96c37432b06f044fd2e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" Mar 20 15:52:25 crc kubenswrapper[4730]: I0320 15:52:25.621776 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flvb5" Mar 20 15:52:27 crc kubenswrapper[4730]: I0320 15:52:27.532232 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:27 crc kubenswrapper[4730]: I0320 15:52:27.532664 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:27 crc kubenswrapper[4730]: I0320 15:52:27.735657 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567032-mfvhl"] Mar 20 15:52:28 crc kubenswrapper[4730]: I0320 15:52:28.120282 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" event={"ID":"76deb34d-7c3d-4510-9b0a-ac56dcca047a","Type":"ContainerStarted","Data":"ee01fb471a06970161acecb5413bd9bdf6dd1361a6f3d424489465099a9f3c85"} Mar 20 15:52:28 crc kubenswrapper[4730]: I0320 15:52:28.304544 4730 scope.go:117] "RemoveContainer" containerID="12ba423ea0fecce8b2416cc8f75f3323980aae80a20ff26bd2f9a6c4cd464812" Mar 20 15:52:29 crc kubenswrapper[4730]: I0320 15:52:29.130074 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6r2kn_6f97b1f1-1fad-44ec-8253-17dd6a5eee54/kube-multus/2.log" Mar 20 15:52:34 crc kubenswrapper[4730]: I0320 15:52:34.163031 4730 generic.go:334] "Generic (PLEG): container finished" podID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" containerID="3950e99a8167c1c32630e01067078d701c75fdf49d8f8666a31a81f7a02ba1d9" exitCode=0 Mar 20 15:52:34 crc kubenswrapper[4730]: I0320 15:52:34.163147 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" event={"ID":"76deb34d-7c3d-4510-9b0a-ac56dcca047a","Type":"ContainerDied","Data":"3950e99a8167c1c32630e01067078d701c75fdf49d8f8666a31a81f7a02ba1d9"} Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.416030 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.514621 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7twt2\" (UniqueName: \"kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2\") pod \"76deb34d-7c3d-4510-9b0a-ac56dcca047a\" (UID: \"76deb34d-7c3d-4510-9b0a-ac56dcca047a\") " Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.521325 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2" (OuterVolumeSpecName: "kube-api-access-7twt2") pod "76deb34d-7c3d-4510-9b0a-ac56dcca047a" (UID: "76deb34d-7c3d-4510-9b0a-ac56dcca047a"). InnerVolumeSpecName "kube-api-access-7twt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.532601 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.533266 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.616332 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7twt2\" (UniqueName: \"kubernetes.io/projected/76deb34d-7c3d-4510-9b0a-ac56dcca047a-kube-api-access-7twt2\") on node \"crc\" DevicePath \"\"" Mar 20 15:52:35 crc kubenswrapper[4730]: I0320 15:52:35.956483 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv"] Mar 20 15:52:35 crc kubenswrapper[4730]: W0320 15:52:35.958061 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25d50abe_8eeb_4761_83b7_d9e7fbb78a76.slice/crio-fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133 WatchSource:0}: Error finding container fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133: Status 404 returned error can't find the container with id fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133 Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.177314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerStarted","Data":"5da831e15da1bf9d6a82541d2c31ca02cc9597c72f920b0b4ecf41d73364d6da"} Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.177366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerStarted","Data":"fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133"} Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.179103 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567032-mfvhl" event={"ID":"76deb34d-7c3d-4510-9b0a-ac56dcca047a","Type":"ContainerDied","Data":"ee01fb471a06970161acecb5413bd9bdf6dd1361a6f3d424489465099a9f3c85"} Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.179125 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee01fb471a06970161acecb5413bd9bdf6dd1361a6f3d424489465099a9f3c85" Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.179177 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567032-mfvhl" Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.475218 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567026-x7pgl"] Mar 20 15:52:36 crc kubenswrapper[4730]: I0320 15:52:36.479957 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567026-x7pgl"] Mar 20 15:52:37 crc kubenswrapper[4730]: I0320 15:52:37.188628 4730 generic.go:334] "Generic (PLEG): container finished" podID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerID="5da831e15da1bf9d6a82541d2c31ca02cc9597c72f920b0b4ecf41d73364d6da" exitCode=0 Mar 20 15:52:37 crc kubenswrapper[4730]: I0320 15:52:37.188693 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerDied","Data":"5da831e15da1bf9d6a82541d2c31ca02cc9597c72f920b0b4ecf41d73364d6da"} Mar 20 15:52:37 crc kubenswrapper[4730]: I0320 15:52:37.546364 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acd72a0-988c-4c58-a7b4-c139ee0f6ef1" path="/var/lib/kubelet/pods/2acd72a0-988c-4c58-a7b4-c139ee0f6ef1/volumes" Mar 20 15:52:39 crc kubenswrapper[4730]: I0320 15:52:39.204576 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerID="e2b1cec905a13e9cee0c6111466f9589c7922fe5557b9c7783d948bda532c401" exitCode=0 Mar 20 15:52:39 crc kubenswrapper[4730]: I0320 15:52:39.204628 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerDied","Data":"e2b1cec905a13e9cee0c6111466f9589c7922fe5557b9c7783d948bda532c401"} Mar 20 15:52:40 crc kubenswrapper[4730]: I0320 15:52:40.214097 4730 generic.go:334] "Generic (PLEG): container finished" podID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerID="ee7c77ecaf8a998b304c004face78076e67a8e4d2d4e920833aeb1d421d33584" exitCode=0 Mar 20 15:52:40 crc kubenswrapper[4730]: I0320 15:52:40.214469 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerDied","Data":"ee7c77ecaf8a998b304c004face78076e67a8e4d2d4e920833aeb1d421d33584"} Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.500017 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.595006 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle\") pod \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.595127 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util\") pod \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.595364 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zls7g\" (UniqueName: \"kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g\") pod \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\" (UID: \"25d50abe-8eeb-4761-83b7-d9e7fbb78a76\") " Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.598623 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle" (OuterVolumeSpecName: "bundle") pod "25d50abe-8eeb-4761-83b7-d9e7fbb78a76" (UID: "25d50abe-8eeb-4761-83b7-d9e7fbb78a76"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.602057 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g" (OuterVolumeSpecName: "kube-api-access-zls7g") pod "25d50abe-8eeb-4761-83b7-d9e7fbb78a76" (UID: "25d50abe-8eeb-4761-83b7-d9e7fbb78a76"). InnerVolumeSpecName "kube-api-access-zls7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.697436 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zls7g\" (UniqueName: \"kubernetes.io/projected/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-kube-api-access-zls7g\") on node \"crc\" DevicePath \"\"" Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.697477 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:52:41 crc kubenswrapper[4730]: I0320 15:52:41.951449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util" (OuterVolumeSpecName: "util") pod "25d50abe-8eeb-4761-83b7-d9e7fbb78a76" (UID: "25d50abe-8eeb-4761-83b7-d9e7fbb78a76"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:52:42 crc kubenswrapper[4730]: I0320 15:52:42.001549 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25d50abe-8eeb-4761-83b7-d9e7fbb78a76-util\") on node \"crc\" DevicePath \"\"" Mar 20 15:52:42 crc kubenswrapper[4730]: I0320 15:52:42.230574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" event={"ID":"25d50abe-8eeb-4761-83b7-d9e7fbb78a76","Type":"ContainerDied","Data":"fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133"} Mar 20 15:52:42 crc kubenswrapper[4730]: I0320 15:52:42.230631 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd701b9554689bd678971a8747cd36abc6d9ba91fdad5521ca11546c733c133" Mar 20 15:52:42 crc kubenswrapper[4730]: I0320 15:52:42.230756 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.819029 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-w67vt"] Mar 20 15:52:50 crc kubenswrapper[4730]: E0320 15:52:50.819868 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="pull" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.819884 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="pull" Mar 20 15:52:50 crc kubenswrapper[4730]: E0320 15:52:50.819897 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="util" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.819905 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="util" Mar 20 15:52:50 crc kubenswrapper[4730]: E0320 15:52:50.819929 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" containerName="oc" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.819939 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" containerName="oc" Mar 20 15:52:50 crc kubenswrapper[4730]: E0320 15:52:50.819949 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="extract" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.819957 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="extract" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.820079 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d50abe-8eeb-4761-83b7-d9e7fbb78a76" containerName="extract" Mar 
20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.820094 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" containerName="oc" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.820562 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.848021 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-94qns" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.848232 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.849050 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.855752 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-w67vt"] Mar 20 15:52:50 crc kubenswrapper[4730]: I0320 15:52:50.914214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvnp\" (UniqueName: \"kubernetes.io/projected/5db89423-34f0-46c3-9dcf-2179c6c6f42a-kube-api-access-9fvnp\") pod \"obo-prometheus-operator-8ff7d675-w67vt\" (UID: \"5db89423-34f0-46c3-9dcf-2179c6c6f42a\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.014962 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvnp\" (UniqueName: \"kubernetes.io/projected/5db89423-34f0-46c3-9dcf-2179c6c6f42a-kube-api-access-9fvnp\") pod \"obo-prometheus-operator-8ff7d675-w67vt\" (UID: \"5db89423-34f0-46c3-9dcf-2179c6c6f42a\") " 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.046136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvnp\" (UniqueName: \"kubernetes.io/projected/5db89423-34f0-46c3-9dcf-2179c6c6f42a-kube-api-access-9fvnp\") pod \"obo-prometheus-operator-8ff7d675-w67vt\" (UID: \"5db89423-34f0-46c3-9dcf-2179c6c6f42a\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.137894 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.241776 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.242749 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.245630 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.246141 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-msdtd" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.268300 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.274382 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.275236 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.283074 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.317805 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.317926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.409971 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-w67vt"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.429849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.429947 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.429986 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.430027 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.437237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.438284 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c12d2a2b-f7db-41be-89e1-97869c8119c2-apiservice-cert\") 
pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp\" (UID: \"c12d2a2b-f7db-41be-89e1-97869c8119c2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.531074 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.531438 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.534464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.536837 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7520ba92-5020-48d1-8d1c-fa20f0f407be-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl\" (UID: \"7520ba92-5020-48d1-8d1c-fa20f0f407be\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.563559 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-nh5dg"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.564536 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.568521 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.568891 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-x7fg9" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.572503 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.582214 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-nh5dg"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.628505 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.633092 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28a4594d-a811-4533-8d77-40267a80c581-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.634052 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422mh\" (UniqueName: \"kubernetes.io/projected/28a4594d-a811-4533-8d77-40267a80c581-kube-api-access-422mh\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.735964 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28a4594d-a811-4533-8d77-40267a80c581-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.736385 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-422mh\" (UniqueName: \"kubernetes.io/projected/28a4594d-a811-4533-8d77-40267a80c581-kube-api-access-422mh\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.740074 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28a4594d-a811-4533-8d77-40267a80c581-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.755715 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-422mh\" (UniqueName: \"kubernetes.io/projected/28a4594d-a811-4533-8d77-40267a80c581-kube-api-access-422mh\") pod \"observability-operator-6dd7dd855f-nh5dg\" (UID: \"28a4594d-a811-4533-8d77-40267a80c581\") " pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.861667 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp"] Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.889308 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:52:51 crc kubenswrapper[4730]: I0320 15:52:51.955449 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl"] Mar 20 15:52:51 crc kubenswrapper[4730]: W0320 15:52:51.975977 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7520ba92_5020_48d1_8d1c_fa20f0f407be.slice/crio-aae66df47fa8f48155e7be17a32c8324af3e7932ebdd68336912748a089e7057 WatchSource:0}: Error finding container aae66df47fa8f48155e7be17a32c8324af3e7932ebdd68336912748a089e7057: Status 404 returned error can't find the container with id aae66df47fa8f48155e7be17a32c8324af3e7932ebdd68336912748a089e7057 Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.049783 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-6b8b4f7dbd-pmzhq"] Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.050509 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.055537 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.055676 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cj57w" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.096940 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6b8b4f7dbd-pmzhq"] Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.146115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-apiservice-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.146203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/50aad4a2-a828-49d9-9bb3-115336081293-openshift-service-ca\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.146280 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-webhook-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.146309 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpb9\" (UniqueName: \"kubernetes.io/projected/50aad4a2-a828-49d9-9bb3-115336081293-kube-api-access-gfpb9\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.153596 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-nh5dg"] Mar 20 15:52:52 crc kubenswrapper[4730]: W0320 15:52:52.160654 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a4594d_a811_4533_8d77_40267a80c581.slice/crio-d7cc550886367b76d3e6989f1295f791d379fae6bc9b1b9d5a180e3b7b8e1af3 WatchSource:0}: Error finding container d7cc550886367b76d3e6989f1295f791d379fae6bc9b1b9d5a180e3b7b8e1af3: Status 404 returned error can't find the container with id d7cc550886367b76d3e6989f1295f791d379fae6bc9b1b9d5a180e3b7b8e1af3 Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.247561 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-apiservice-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.247605 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/50aad4a2-a828-49d9-9bb3-115336081293-openshift-service-ca\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.247637 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-webhook-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.247659 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpb9\" (UniqueName: \"kubernetes.io/projected/50aad4a2-a828-49d9-9bb3-115336081293-kube-api-access-gfpb9\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.248539 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/50aad4a2-a828-49d9-9bb3-115336081293-openshift-service-ca\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.253392 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-apiservice-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.264010 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50aad4a2-a828-49d9-9bb3-115336081293-webhook-cert\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: 
I0320 15:52:52.274937 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpb9\" (UniqueName: \"kubernetes.io/projected/50aad4a2-a828-49d9-9bb3-115336081293-kube-api-access-gfpb9\") pod \"perses-operator-6b8b4f7dbd-pmzhq\" (UID: \"50aad4a2-a828-49d9-9bb3-115336081293\") " pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.290465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" event={"ID":"28a4594d-a811-4533-8d77-40267a80c581","Type":"ContainerStarted","Data":"d7cc550886367b76d3e6989f1295f791d379fae6bc9b1b9d5a180e3b7b8e1af3"} Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.291438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" event={"ID":"5db89423-34f0-46c3-9dcf-2179c6c6f42a","Type":"ContainerStarted","Data":"812c1ec4f28711c2f69e7a6eaa82c5c42f9ab5eefb0d889061a47292630d3101"} Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.292519 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" event={"ID":"c12d2a2b-f7db-41be-89e1-97869c8119c2","Type":"ContainerStarted","Data":"2748a7a4f4fc3bb838f537be59e8e63e9fae98e41e8853f3b704c7a55ffc4554"} Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.293504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" event={"ID":"7520ba92-5020-48d1-8d1c-fa20f0f407be","Type":"ContainerStarted","Data":"aae66df47fa8f48155e7be17a32c8324af3e7932ebdd68336912748a089e7057"} Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.395232 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:52:52 crc kubenswrapper[4730]: I0320 15:52:52.582108 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6b8b4f7dbd-pmzhq"] Mar 20 15:52:52 crc kubenswrapper[4730]: W0320 15:52:52.588893 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50aad4a2_a828_49d9_9bb3_115336081293.slice/crio-ae6611d5cf24af0aa2c26d567d53fdc5b2d3e9c0ed92191af4ac29cf066b840f WatchSource:0}: Error finding container ae6611d5cf24af0aa2c26d567d53fdc5b2d3e9c0ed92191af4ac29cf066b840f: Status 404 returned error can't find the container with id ae6611d5cf24af0aa2c26d567d53fdc5b2d3e9c0ed92191af4ac29cf066b840f Mar 20 15:52:53 crc kubenswrapper[4730]: I0320 15:52:53.185970 4730 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 15:52:53 crc kubenswrapper[4730]: I0320 15:52:53.300121 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" event={"ID":"50aad4a2-a828-49d9-9bb3-115336081293","Type":"ContainerStarted","Data":"ae6611d5cf24af0aa2c26d567d53fdc5b2d3e9c0ed92191af4ac29cf066b840f"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.414602 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" event={"ID":"28a4594d-a811-4533-8d77-40267a80c581","Type":"ContainerStarted","Data":"54eda06c2dd0c542ff66769d245e46a214a066c5fb29a8837847261e5595fced"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.415119 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.418290 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" event={"ID":"50aad4a2-a828-49d9-9bb3-115336081293","Type":"ContainerStarted","Data":"756db39c91a41f4d6b97f77ab6b32f7be09b7bff7ffc8c60c7ca44b3aaf2e42d"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.418372 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.419836 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" event={"ID":"5db89423-34f0-46c3-9dcf-2179c6c6f42a","Type":"ContainerStarted","Data":"003e343a94dd002b77923c0bf46adad79ef3ccdd07ba9e75c36ca001f3f83214"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.421163 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" event={"ID":"c12d2a2b-f7db-41be-89e1-97869c8119c2","Type":"ContainerStarted","Data":"b77e7de2158f5672a943cd651cb96e011621e76175ed6cd68e1c500505b74a87"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.422550 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" event={"ID":"7520ba92-5020-48d1-8d1c-fa20f0f407be","Type":"ContainerStarted","Data":"93c5db9de95b72bb550f0e835cfd0d2bc9c44293ec8c3368581c658a985dae9a"} Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.432890 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" podStartSLOduration=2.007650781 podStartE2EDuration="12.43287103s" podCreationTimestamp="2026-03-20 15:52:51 +0000 UTC" firstStartedPulling="2026-03-20 15:52:52.163471061 +0000 UTC m=+831.376842430" lastFinishedPulling="2026-03-20 15:53:02.58869131 +0000 UTC m=+841.802062679" observedRunningTime="2026-03-20 15:53:03.430465122 +0000 UTC 
m=+842.643836481" watchObservedRunningTime="2026-03-20 15:53:03.43287103 +0000 UTC m=+842.646242399" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.446534 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-w67vt" podStartSLOduration=2.357008322 podStartE2EDuration="13.446515297s" podCreationTimestamp="2026-03-20 15:52:50 +0000 UTC" firstStartedPulling="2026-03-20 15:52:51.44048397 +0000 UTC m=+830.653855349" lastFinishedPulling="2026-03-20 15:53:02.529990955 +0000 UTC m=+841.743362324" observedRunningTime="2026-03-20 15:53:03.44590969 +0000 UTC m=+842.659281059" watchObservedRunningTime="2026-03-20 15:53:03.446515297 +0000 UTC m=+842.659886656" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.478544 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp" podStartSLOduration=1.835109015 podStartE2EDuration="12.478524825s" podCreationTimestamp="2026-03-20 15:52:51 +0000 UTC" firstStartedPulling="2026-03-20 15:52:51.888475619 +0000 UTC m=+831.101846998" lastFinishedPulling="2026-03-20 15:53:02.531891439 +0000 UTC m=+841.745262808" observedRunningTime="2026-03-20 15:53:03.473398889 +0000 UTC m=+842.686770278" watchObservedRunningTime="2026-03-20 15:53:03.478524825 +0000 UTC m=+842.691896194" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.506645 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-nh5dg" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.513504 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl" podStartSLOduration=1.9424532110000001 podStartE2EDuration="12.513482497s" podCreationTimestamp="2026-03-20 15:52:51 +0000 UTC" firstStartedPulling="2026-03-20 
15:52:51.979371868 +0000 UTC m=+831.192743237" lastFinishedPulling="2026-03-20 15:53:02.550401154 +0000 UTC m=+841.763772523" observedRunningTime="2026-03-20 15:53:03.507981061 +0000 UTC m=+842.721352420" watchObservedRunningTime="2026-03-20 15:53:03.513482497 +0000 UTC m=+842.726853866" Mar 20 15:53:03 crc kubenswrapper[4730]: I0320 15:53:03.526778 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" podStartSLOduration=1.5637438160000001 podStartE2EDuration="11.526761203s" podCreationTimestamp="2026-03-20 15:52:52 +0000 UTC" firstStartedPulling="2026-03-20 15:52:52.591267667 +0000 UTC m=+831.804639036" lastFinishedPulling="2026-03-20 15:53:02.554285054 +0000 UTC m=+841.767656423" observedRunningTime="2026-03-20 15:53:03.526734013 +0000 UTC m=+842.740105382" watchObservedRunningTime="2026-03-20 15:53:03.526761203 +0000 UTC m=+842.740132562" Mar 20 15:53:12 crc kubenswrapper[4730]: I0320 15:53:12.398238 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-6b8b4f7dbd-pmzhq" Mar 20 15:53:28 crc kubenswrapper[4730]: I0320 15:53:28.350086 4730 scope.go:117] "RemoveContainer" containerID="b5ebe6b01434979e266e3872ff5405b028a732d1dd5830a3d6f3ad270518946a" Mar 20 15:53:28 crc kubenswrapper[4730]: I0320 15:53:28.927445 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9"] Mar 20 15:53:28 crc kubenswrapper[4730]: I0320 15:53:28.928494 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:28 crc kubenswrapper[4730]: I0320 15:53:28.930320 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 15:53:28 crc kubenswrapper[4730]: I0320 15:53:28.943515 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9"] Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.061322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.061405 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.061560 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k8w4\" (UniqueName: \"kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: 
I0320 15:53:29.162807 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.162864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k8w4\" (UniqueName: \"kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.162911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.163347 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.163437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.181650 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k8w4\" (UniqueName: \"kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.244417 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:29 crc kubenswrapper[4730]: I0320 15:53:29.676200 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9"] Mar 20 15:53:30 crc kubenswrapper[4730]: I0320 15:53:30.576485 4730 generic.go:334] "Generic (PLEG): container finished" podID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerID="f3e0533f6d3c39314c123d1b22008e9474bf3e93a3210aaa3ac390f338322834" exitCode=0 Mar 20 15:53:30 crc kubenswrapper[4730]: I0320 15:53:30.576537 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" event={"ID":"6caa320c-cdca-4f52-aac0-b5c3325396db","Type":"ContainerDied","Data":"f3e0533f6d3c39314c123d1b22008e9474bf3e93a3210aaa3ac390f338322834"} Mar 20 15:53:30 crc kubenswrapper[4730]: I0320 15:53:30.576589 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" event={"ID":"6caa320c-cdca-4f52-aac0-b5c3325396db","Type":"ContainerStarted","Data":"1d9fe02310e187ee215b30ee661f2d9f0de87857b32d76f159748ea35fbc0a01"} Mar 20 15:53:30 crc kubenswrapper[4730]: I0320 15:53:30.578029 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.305006 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.310151 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.335315 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.596214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.596353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.596382 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wwc\" (UniqueName: 
\"kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.697501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.697622 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.697646 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55wwc\" (UniqueName: \"kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.698492 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.698499 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.729895 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55wwc\" (UniqueName: \"kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc\") pod \"redhat-operators-p4mms\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:31 crc kubenswrapper[4730]: I0320 15:53:31.938219 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.162818 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:32 crc kubenswrapper[4730]: W0320 15:53:32.174416 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51e780f_650b_45d8_a2c3_b6b73ce74c61.slice/crio-c07e01328d7c5f71cfb88d67fc57544826757077ff694bb1f7eb716cf5cf5bd7 WatchSource:0}: Error finding container c07e01328d7c5f71cfb88d67fc57544826757077ff694bb1f7eb716cf5cf5bd7: Status 404 returned error can't find the container with id c07e01328d7c5f71cfb88d67fc57544826757077ff694bb1f7eb716cf5cf5bd7 Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.587408 4730 generic.go:334] "Generic (PLEG): container finished" podID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerID="a9b15d45bb34224d6766d66172ba74c36da9471e707ae2a90cea12d0f5f9aca2" exitCode=0 Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.587487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" 
event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerDied","Data":"a9b15d45bb34224d6766d66172ba74c36da9471e707ae2a90cea12d0f5f9aca2"} Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.587523 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerStarted","Data":"c07e01328d7c5f71cfb88d67fc57544826757077ff694bb1f7eb716cf5cf5bd7"} Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.590139 4730 generic.go:334] "Generic (PLEG): container finished" podID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerID="55a20d9cb2b67f689bac1999ea91b76b3f6b8df389a04a628d2ce58c841ccc10" exitCode=0 Mar 20 15:53:32 crc kubenswrapper[4730]: I0320 15:53:32.590186 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" event={"ID":"6caa320c-cdca-4f52-aac0-b5c3325396db","Type":"ContainerDied","Data":"55a20d9cb2b67f689bac1999ea91b76b3f6b8df389a04a628d2ce58c841ccc10"} Mar 20 15:53:33 crc kubenswrapper[4730]: I0320 15:53:33.598711 4730 generic.go:334] "Generic (PLEG): container finished" podID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerID="f50153e25970fd5b818891231f77a23e0800ff162b8e53679d315a70b67ad26d" exitCode=0 Mar 20 15:53:33 crc kubenswrapper[4730]: I0320 15:53:33.598813 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" event={"ID":"6caa320c-cdca-4f52-aac0-b5c3325396db","Type":"ContainerDied","Data":"f50153e25970fd5b818891231f77a23e0800ff162b8e53679d315a70b67ad26d"} Mar 20 15:53:33 crc kubenswrapper[4730]: I0320 15:53:33.601047 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" 
event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerStarted","Data":"f6fd4d176812b359067e665f23250f950ab5e29084994d678328733032dac222"} Mar 20 15:53:34 crc kubenswrapper[4730]: I0320 15:53:34.608448 4730 generic.go:334] "Generic (PLEG): container finished" podID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerID="f6fd4d176812b359067e665f23250f950ab5e29084994d678328733032dac222" exitCode=0 Mar 20 15:53:34 crc kubenswrapper[4730]: I0320 15:53:34.609419 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerDied","Data":"f6fd4d176812b359067e665f23250f950ab5e29084994d678328733032dac222"} Mar 20 15:53:34 crc kubenswrapper[4730]: I0320 15:53:34.846117 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.036878 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle\") pod \"6caa320c-cdca-4f52-aac0-b5c3325396db\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.036918 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util\") pod \"6caa320c-cdca-4f52-aac0-b5c3325396db\" (UID: \"6caa320c-cdca-4f52-aac0-b5c3325396db\") " Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.036982 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k8w4\" (UniqueName: \"kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4\") pod \"6caa320c-cdca-4f52-aac0-b5c3325396db\" (UID: 
\"6caa320c-cdca-4f52-aac0-b5c3325396db\") " Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.037594 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle" (OuterVolumeSpecName: "bundle") pod "6caa320c-cdca-4f52-aac0-b5c3325396db" (UID: "6caa320c-cdca-4f52-aac0-b5c3325396db"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.043422 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4" (OuterVolumeSpecName: "kube-api-access-4k8w4") pod "6caa320c-cdca-4f52-aac0-b5c3325396db" (UID: "6caa320c-cdca-4f52-aac0-b5c3325396db"). InnerVolumeSpecName "kube-api-access-4k8w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.050546 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util" (OuterVolumeSpecName: "util") pod "6caa320c-cdca-4f52-aac0-b5c3325396db" (UID: "6caa320c-cdca-4f52-aac0-b5c3325396db"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.138746 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.138807 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6caa320c-cdca-4f52-aac0-b5c3325396db-util\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.138833 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k8w4\" (UniqueName: \"kubernetes.io/projected/6caa320c-cdca-4f52-aac0-b5c3325396db-kube-api-access-4k8w4\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.630461 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerStarted","Data":"dab2914bb52dccdd630edde42af76c8a7122f9d5ae1d800cd9af1744c1e01631"} Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.637502 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" event={"ID":"6caa320c-cdca-4f52-aac0-b5c3325396db","Type":"ContainerDied","Data":"1d9fe02310e187ee215b30ee661f2d9f0de87857b32d76f159748ea35fbc0a01"} Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.637719 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9fe02310e187ee215b30ee661f2d9f0de87857b32d76f159748ea35fbc0a01" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.637913 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9" Mar 20 15:53:35 crc kubenswrapper[4730]: I0320 15:53:35.657833 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p4mms" podStartSLOduration=2.110061269 podStartE2EDuration="4.657817117s" podCreationTimestamp="2026-03-20 15:53:31 +0000 UTC" firstStartedPulling="2026-03-20 15:53:32.589079423 +0000 UTC m=+871.802450792" lastFinishedPulling="2026-03-20 15:53:35.136835231 +0000 UTC m=+874.350206640" observedRunningTime="2026-03-20 15:53:35.65445351 +0000 UTC m=+874.867824879" watchObservedRunningTime="2026-03-20 15:53:35.657817117 +0000 UTC m=+874.871188486" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.881400 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg"] Mar 20 15:53:37 crc kubenswrapper[4730]: E0320 15:53:37.882028 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="util" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.882047 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="util" Mar 20 15:53:37 crc kubenswrapper[4730]: E0320 15:53:37.882077 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="pull" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.882088 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="pull" Mar 20 15:53:37 crc kubenswrapper[4730]: E0320 15:53:37.882114 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="extract" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.882125 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="extract" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.882302 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6caa320c-cdca-4f52-aac0-b5c3325396db" containerName="extract" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.882906 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.886038 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.886058 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cwsbf" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.886422 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.908108 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg"] Mar 20 15:53:37 crc kubenswrapper[4730]: I0320 15:53:37.974559 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssqv\" (UniqueName: \"kubernetes.io/projected/e3bdfb07-3f68-4262-8116-44b5ea591644-kube-api-access-8ssqv\") pod \"nmstate-operator-796d4cfff4-2qcpg\" (UID: \"e3bdfb07-3f68-4262-8116-44b5ea591644\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" Mar 20 15:53:38 crc kubenswrapper[4730]: I0320 15:53:38.075845 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ssqv\" (UniqueName: \"kubernetes.io/projected/e3bdfb07-3f68-4262-8116-44b5ea591644-kube-api-access-8ssqv\") pod \"nmstate-operator-796d4cfff4-2qcpg\" (UID: \"e3bdfb07-3f68-4262-8116-44b5ea591644\") " 
pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" Mar 20 15:53:38 crc kubenswrapper[4730]: I0320 15:53:38.102043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ssqv\" (UniqueName: \"kubernetes.io/projected/e3bdfb07-3f68-4262-8116-44b5ea591644-kube-api-access-8ssqv\") pod \"nmstate-operator-796d4cfff4-2qcpg\" (UID: \"e3bdfb07-3f68-4262-8116-44b5ea591644\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" Mar 20 15:53:38 crc kubenswrapper[4730]: I0320 15:53:38.203915 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" Mar 20 15:53:38 crc kubenswrapper[4730]: I0320 15:53:38.688048 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg"] Mar 20 15:53:39 crc kubenswrapper[4730]: I0320 15:53:39.657445 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" event={"ID":"e3bdfb07-3f68-4262-8116-44b5ea591644","Type":"ContainerStarted","Data":"a4a50bfe819fc07c9c4b0f05d1cb2bbbb4acbed49d72f5c9bb9d4579267c7e17"} Mar 20 15:53:41 crc kubenswrapper[4730]: I0320 15:53:41.939191 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:41 crc kubenswrapper[4730]: I0320 15:53:41.939559 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:41 crc kubenswrapper[4730]: I0320 15:53:41.977830 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:42 crc kubenswrapper[4730]: I0320 15:53:42.678487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" 
event={"ID":"e3bdfb07-3f68-4262-8116-44b5ea591644","Type":"ContainerStarted","Data":"3a876768ab3e96f444d6f3a493c8af504a7a4838ec7ea5e6f6224f5c1a0e4d1f"} Mar 20 15:53:42 crc kubenswrapper[4730]: I0320 15:53:42.746641 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:42 crc kubenswrapper[4730]: I0320 15:53:42.769233 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-2qcpg" podStartSLOduration=2.473302517 podStartE2EDuration="5.769209832s" podCreationTimestamp="2026-03-20 15:53:37 +0000 UTC" firstStartedPulling="2026-03-20 15:53:38.703121198 +0000 UTC m=+877.916492567" lastFinishedPulling="2026-03-20 15:53:41.999028513 +0000 UTC m=+881.212399882" observedRunningTime="2026-03-20 15:53:42.709393885 +0000 UTC m=+881.922765264" watchObservedRunningTime="2026-03-20 15:53:42.769209832 +0000 UTC m=+881.982581211" Mar 20 15:53:42 crc kubenswrapper[4730]: I0320 15:53:42.880037 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:53:42 crc kubenswrapper[4730]: I0320 15:53:42.880087 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.723337 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.729813 4730 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" Mar 20 15:53:43 crc kubenswrapper[4730]: W0320 15:53:43.731189 4730 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-vc8sk": failed to list *v1.Secret: secrets "nmstate-handler-dockercfg-vc8sk" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Mar 20 15:53:43 crc kubenswrapper[4730]: E0320 15:53:43.731277 4730 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-vc8sk\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nmstate-handler-dockercfg-vc8sk\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.731331 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.732468 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.739196 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.740489 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.744986 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-6tdt2"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.761615 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.778727 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.841169 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.842089 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.846537 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.846719 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.846831 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-s7htn" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.851455 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6"] Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-dbus-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862202 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f827638-33ac-4f99-920b-6e9b72db7955-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862240 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hgn\" (UniqueName: \"kubernetes.io/projected/0f827638-33ac-4f99-920b-6e9b72db7955-kube-api-access-94hgn\") pod \"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862293 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgv4s\" (UniqueName: \"kubernetes.io/projected/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-kube-api-access-lgv4s\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-nmstate-lock\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862362 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9rd\" (UniqueName: \"kubernetes.io/projected/3f50a695-6f8b-42e6-aa4f-3dfd888b6afa-kube-api-access-2p9rd\") pod \"nmstate-metrics-9b8c8685d-nfr9k\" (UID: \"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.862387 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-ovs-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hgn\" (UniqueName: \"kubernetes.io/projected/0f827638-33ac-4f99-920b-6e9b72db7955-kube-api-access-94hgn\") pod 
\"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963167 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgv4s\" (UniqueName: \"kubernetes.io/projected/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-kube-api-access-lgv4s\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5nq\" (UniqueName: \"kubernetes.io/projected/663e9228-322c-4d6a-8988-0033d5dd587a-kube-api-access-7g5nq\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963217 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-nmstate-lock\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963235 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9rd\" (UniqueName: \"kubernetes.io/projected/3f50a695-6f8b-42e6-aa4f-3dfd888b6afa-kube-api-access-2p9rd\") pod \"nmstate-metrics-9b8c8685d-nfr9k\" (UID: \"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963294 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-ovs-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/663e9228-322c-4d6a-8988-0033d5dd587a-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963368 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-dbus-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.963427 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f827638-33ac-4f99-920b-6e9b72db7955-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.964270 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" 
(UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-ovs-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.964432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-nmstate-lock\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.964634 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-dbus-socket\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.979576 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f827638-33ac-4f99-920b-6e9b72db7955-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.987175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hgn\" (UniqueName: \"kubernetes.io/projected/0f827638-33ac-4f99-920b-6e9b72db7955-kube-api-access-94hgn\") pod \"nmstate-webhook-5f558f5558-nq6dd\" (UID: \"0f827638-33ac-4f99-920b-6e9b72db7955\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.987744 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgv4s\" (UniqueName: 
\"kubernetes.io/projected/0a4f6fcf-7c76-49cf-8f3c-d83879a650f1-kube-api-access-lgv4s\") pod \"nmstate-handler-6tdt2\" (UID: \"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1\") " pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:43 crc kubenswrapper[4730]: I0320 15:53:43.993070 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9rd\" (UniqueName: \"kubernetes.io/projected/3f50a695-6f8b-42e6-aa4f-3dfd888b6afa-kube-api-access-2p9rd\") pod \"nmstate-metrics-9b8c8685d-nfr9k\" (UID: \"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.029108 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-854c989bdc-94fm2"] Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.029814 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.048528 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-854c989bdc-94fm2"] Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.064081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.064158 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5nq\" (UniqueName: \"kubernetes.io/projected/663e9228-322c-4d6a-8988-0033d5dd587a-kube-api-access-7g5nq\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" 
Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.064195 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/663e9228-322c-4d6a-8988-0033d5dd587a-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: E0320 15:53:44.064513 4730 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 20 15:53:44 crc kubenswrapper[4730]: E0320 15:53:44.064656 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert podName:663e9228-322c-4d6a-8988-0033d5dd587a nodeName:}" failed. No retries permitted until 2026-03-20 15:53:44.564637861 +0000 UTC m=+883.778009230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-nnrp6" (UID: "663e9228-322c-4d6a-8988-0033d5dd587a") : secret "plugin-serving-cert" not found Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.065066 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/663e9228-322c-4d6a-8988-0033d5dd587a-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.079905 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5nq\" (UniqueName: \"kubernetes.io/projected/663e9228-322c-4d6a-8988-0033d5dd587a-kube-api-access-7g5nq\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: 
\"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-trusted-ca-bundle\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165523 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-oauth-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165569 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165601 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-service-ca\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165702 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-oauth-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtgw\" (UniqueName: \"kubernetes.io/projected/1f423bde-b761-4c2a-8519-805e6e44d099-kube-api-access-ngtgw\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.165926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-console-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266796 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-trusted-ca-bundle\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266853 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-oauth-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266882 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-service-ca\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-oauth-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266946 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngtgw\" (UniqueName: \"kubernetes.io/projected/1f423bde-b761-4c2a-8519-805e6e44d099-kube-api-access-ngtgw\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.266969 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-console-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.268794 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-oauth-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.268797 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-console-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.268951 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-trusted-ca-bundle\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.269264 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f423bde-b761-4c2a-8519-805e6e44d099-service-ca\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.270162 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-serving-cert\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.271824 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/1f423bde-b761-4c2a-8519-805e6e44d099-console-oauth-config\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.283900 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.288820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngtgw\" (UniqueName: \"kubernetes.io/projected/1f423bde-b761-4c2a-8519-805e6e44d099-kube-api-access-ngtgw\") pod \"console-854c989bdc-94fm2\" (UID: \"1f423bde-b761-4c2a-8519-805e6e44d099\") " pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.347657 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.540002 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vc8sk" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.541455 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.543815 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.549135 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.554421 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-854c989bdc-94fm2"] Mar 20 15:53:44 crc kubenswrapper[4730]: W0320 15:53:44.568879 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f423bde_b761_4c2a_8519_805e6e44d099.slice/crio-d7a11f205b9b4e991c5697a502c331607ca7da7d297ef34f6d29dd58e050d5c0 WatchSource:0}: Error finding container d7a11f205b9b4e991c5697a502c331607ca7da7d297ef34f6d29dd58e050d5c0: Status 404 returned error can't find the container with id d7a11f205b9b4e991c5697a502c331607ca7da7d297ef34f6d29dd58e050d5c0 Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.580105 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.585232 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/663e9228-322c-4d6a-8988-0033d5dd587a-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-nnrp6\" (UID: \"663e9228-322c-4d6a-8988-0033d5dd587a\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.691228 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-854c989bdc-94fm2" event={"ID":"1f423bde-b761-4c2a-8519-805e6e44d099","Type":"ContainerStarted","Data":"d7a11f205b9b4e991c5697a502c331607ca7da7d297ef34f6d29dd58e050d5c0"} Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 
15:53:44.693532 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p4mms" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="registry-server" containerID="cri-o://dab2914bb52dccdd630edde42af76c8a7122f9d5ae1d800cd9af1744c1e01631" gracePeriod=2 Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.693823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6tdt2" event={"ID":"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1","Type":"ContainerStarted","Data":"12aa4f4ac579ae5563b0af0d1dfafd2ecfade139943f37fabf1ec3558b1e5c94"} Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.759131 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k"] Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.761486 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" Mar 20 15:53:44 crc kubenswrapper[4730]: W0320 15:53:44.772352 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f50a695_6f8b_42e6_aa4f_3dfd888b6afa.slice/crio-f6577e316c5eb7230e82d9a69af5ad70a8d8731e367a973b3c69091fc1bfbe60 WatchSource:0}: Error finding container f6577e316c5eb7230e82d9a69af5ad70a8d8731e367a973b3c69091fc1bfbe60: Status 404 returned error can't find the container with id f6577e316c5eb7230e82d9a69af5ad70a8d8731e367a973b3c69091fc1bfbe60 Mar 20 15:53:44 crc kubenswrapper[4730]: I0320 15:53:44.827859 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd"] Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.171425 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6"] Mar 20 15:53:45 crc kubenswrapper[4730]: W0320 15:53:45.175856 4730 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663e9228_322c_4d6a_8988_0033d5dd587a.slice/crio-d78d19844368244adb5bc78f42a72d85d41fc682757e6334e2b57ecc31edda9b WatchSource:0}: Error finding container d78d19844368244adb5bc78f42a72d85d41fc682757e6334e2b57ecc31edda9b: Status 404 returned error can't find the container with id d78d19844368244adb5bc78f42a72d85d41fc682757e6334e2b57ecc31edda9b Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.701113 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" event={"ID":"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa","Type":"ContainerStarted","Data":"f6577e316c5eb7230e82d9a69af5ad70a8d8731e367a973b3c69091fc1bfbe60"} Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.704007 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" event={"ID":"0f827638-33ac-4f99-920b-6e9b72db7955","Type":"ContainerStarted","Data":"70e0ceee949e00f078fbb5706e2c9ceb5e1c711423fba4d4253b5870c75b584a"} Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.705362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" event={"ID":"663e9228-322c-4d6a-8988-0033d5dd587a","Type":"ContainerStarted","Data":"d78d19844368244adb5bc78f42a72d85d41fc682757e6334e2b57ecc31edda9b"} Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.706746 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-854c989bdc-94fm2" event={"ID":"1f423bde-b761-4c2a-8519-805e6e44d099","Type":"ContainerStarted","Data":"e41e26443fc6de31cd89d338531be24c3f63d731dacf5ce0ae960ef3f12588c3"} Mar 20 15:53:45 crc kubenswrapper[4730]: I0320 15:53:45.728315 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-854c989bdc-94fm2" podStartSLOduration=1.728295148 
podStartE2EDuration="1.728295148s" podCreationTimestamp="2026-03-20 15:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:53:45.726241639 +0000 UTC m=+884.939613018" watchObservedRunningTime="2026-03-20 15:53:45.728295148 +0000 UTC m=+884.941666537" Mar 20 15:53:46 crc kubenswrapper[4730]: I0320 15:53:46.724656 4730 generic.go:334] "Generic (PLEG): container finished" podID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerID="dab2914bb52dccdd630edde42af76c8a7122f9d5ae1d800cd9af1744c1e01631" exitCode=0 Mar 20 15:53:46 crc kubenswrapper[4730]: I0320 15:53:46.724733 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerDied","Data":"dab2914bb52dccdd630edde42af76c8a7122f9d5ae1d800cd9af1744c1e01631"} Mar 20 15:53:46 crc kubenswrapper[4730]: I0320 15:53:46.868217 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.014594 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wwc\" (UniqueName: \"kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc\") pod \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.015811 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities\") pod \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.015884 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content\") pod \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\" (UID: \"f51e780f-650b-45d8-a2c3-b6b73ce74c61\") " Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.016792 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities" (OuterVolumeSpecName: "utilities") pod "f51e780f-650b-45d8-a2c3-b6b73ce74c61" (UID: "f51e780f-650b-45d8-a2c3-b6b73ce74c61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.020878 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc" (OuterVolumeSpecName: "kube-api-access-55wwc") pod "f51e780f-650b-45d8-a2c3-b6b73ce74c61" (UID: "f51e780f-650b-45d8-a2c3-b6b73ce74c61"). InnerVolumeSpecName "kube-api-access-55wwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.118047 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55wwc\" (UniqueName: \"kubernetes.io/projected/f51e780f-650b-45d8-a2c3-b6b73ce74c61-kube-api-access-55wwc\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.118087 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.151583 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f51e780f-650b-45d8-a2c3-b6b73ce74c61" (UID: "f51e780f-650b-45d8-a2c3-b6b73ce74c61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.219083 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51e780f-650b-45d8-a2c3-b6b73ce74c61-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.732536 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4mms" event={"ID":"f51e780f-650b-45d8-a2c3-b6b73ce74c61","Type":"ContainerDied","Data":"c07e01328d7c5f71cfb88d67fc57544826757077ff694bb1f7eb716cf5cf5bd7"} Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.732581 4730 scope.go:117] "RemoveContainer" containerID="dab2914bb52dccdd630edde42af76c8a7122f9d5ae1d800cd9af1744c1e01631" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.732691 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4mms" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.751900 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.756265 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p4mms"] Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.935577 4730 scope.go:117] "RemoveContainer" containerID="f6fd4d176812b359067e665f23250f950ab5e29084994d678328733032dac222" Mar 20 15:53:47 crc kubenswrapper[4730]: I0320 15:53:47.957589 4730 scope.go:117] "RemoveContainer" containerID="a9b15d45bb34224d6766d66172ba74c36da9471e707ae2a90cea12d0f5f9aca2" Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.745193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" event={"ID":"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa","Type":"ContainerStarted","Data":"d3f2447926f0361cdc1c87f47a16412046749eea2217a9b2ad72c3059016c665"} Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.747602 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.755201 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.759666 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" event={"ID":"663e9228-322c-4d6a-8988-0033d5dd587a","Type":"ContainerStarted","Data":"69338df9a9bd7b8eac24e28e0b4c0e0630a7148968a80bdb15f0fa2a86e816d0"} Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.774942 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" 
podStartSLOduration=2.63502137 podStartE2EDuration="5.774918157s" podCreationTimestamp="2026-03-20 15:53:43 +0000 UTC" firstStartedPulling="2026-03-20 15:53:44.837167147 +0000 UTC m=+884.050538506" lastFinishedPulling="2026-03-20 15:53:47.977063874 +0000 UTC m=+887.190435293" observedRunningTime="2026-03-20 15:53:48.771960252 +0000 UTC m=+887.985331701" watchObservedRunningTime="2026-03-20 15:53:48.774918157 +0000 UTC m=+887.988289536" Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.838268 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-nnrp6" podStartSLOduration=3.038412779 podStartE2EDuration="5.838230424s" podCreationTimestamp="2026-03-20 15:53:43 +0000 UTC" firstStartedPulling="2026-03-20 15:53:45.178016511 +0000 UTC m=+884.391387880" lastFinishedPulling="2026-03-20 15:53:47.977834116 +0000 UTC m=+887.191205525" observedRunningTime="2026-03-20 15:53:48.832769448 +0000 UTC m=+888.046140817" watchObservedRunningTime="2026-03-20 15:53:48.838230424 +0000 UTC m=+888.051601803" Mar 20 15:53:48 crc kubenswrapper[4730]: I0320 15:53:48.861862 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-6tdt2" podStartSLOduration=2.494707832 podStartE2EDuration="5.861838232s" podCreationTimestamp="2026-03-20 15:53:43 +0000 UTC" firstStartedPulling="2026-03-20 15:53:44.608995057 +0000 UTC m=+883.822366416" lastFinishedPulling="2026-03-20 15:53:47.976125447 +0000 UTC m=+887.189496816" observedRunningTime="2026-03-20 15:53:48.855723236 +0000 UTC m=+888.069094605" watchObservedRunningTime="2026-03-20 15:53:48.861838232 +0000 UTC m=+888.075209611" Mar 20 15:53:49 crc kubenswrapper[4730]: I0320 15:53:49.543802 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" path="/var/lib/kubelet/pods/f51e780f-650b-45d8-a2c3-b6b73ce74c61/volumes" Mar 20 15:53:49 crc kubenswrapper[4730]: I0320 
15:53:49.766261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6tdt2" event={"ID":"0a4f6fcf-7c76-49cf-8f3c-d83879a650f1","Type":"ContainerStarted","Data":"057549402a23539c57ae68646300d5508432df8b711504be91aba508301a510a"} Mar 20 15:53:49 crc kubenswrapper[4730]: I0320 15:53:49.769631 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" event={"ID":"0f827638-33ac-4f99-920b-6e9b72db7955","Type":"ContainerStarted","Data":"36319714d9d076b45a4becfc78e0b05938bf5b292eb742313e25593d679d31f0"} Mar 20 15:53:51 crc kubenswrapper[4730]: I0320 15:53:51.781998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" event={"ID":"3f50a695-6f8b-42e6-aa4f-3dfd888b6afa","Type":"ContainerStarted","Data":"859e2cbbf09914effc9ea4141cfae3cc5109235e9a1358e6294af703226839ed"} Mar 20 15:53:51 crc kubenswrapper[4730]: I0320 15:53:51.806535 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nfr9k" podStartSLOduration=2.369955501 podStartE2EDuration="8.806518104s" podCreationTimestamp="2026-03-20 15:53:43 +0000 UTC" firstStartedPulling="2026-03-20 15:53:44.774774076 +0000 UTC m=+883.988145445" lastFinishedPulling="2026-03-20 15:53:51.211336679 +0000 UTC m=+890.424708048" observedRunningTime="2026-03-20 15:53:51.804928499 +0000 UTC m=+891.018299918" watchObservedRunningTime="2026-03-20 15:53:51.806518104 +0000 UTC m=+891.019889473" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.348217 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.348348 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.356038 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.585322 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-6tdt2" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.815963 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-854c989bdc-94fm2" Mar 20 15:53:54 crc kubenswrapper[4730]: I0320 15:53:54.885617 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.146597 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567034-sdvfb"] Mar 20 15:54:00 crc kubenswrapper[4730]: E0320 15:54:00.147523 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="extract-utilities" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.147545 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="extract-utilities" Mar 20 15:54:00 crc kubenswrapper[4730]: E0320 15:54:00.147563 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="registry-server" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.147573 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="registry-server" Mar 20 15:54:00 crc kubenswrapper[4730]: E0320 15:54:00.147596 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="extract-content" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.147607 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" 
containerName="extract-content" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.147787 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51e780f-650b-45d8-a2c3-b6b73ce74c61" containerName="registry-server" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.148446 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.150698 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.150834 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.151069 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.152173 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567034-sdvfb"] Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.317620 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6dw\" (UniqueName: \"kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw\") pod \"auto-csr-approver-29567034-sdvfb\" (UID: \"c84a0097-0ea0-4397-b72b-07e391268b84\") " pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.419098 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6dw\" (UniqueName: \"kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw\") pod \"auto-csr-approver-29567034-sdvfb\" (UID: \"c84a0097-0ea0-4397-b72b-07e391268b84\") " pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 
15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.448656 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6dw\" (UniqueName: \"kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw\") pod \"auto-csr-approver-29567034-sdvfb\" (UID: \"c84a0097-0ea0-4397-b72b-07e391268b84\") " pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.477277 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.716340 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567034-sdvfb"] Mar 20 15:54:00 crc kubenswrapper[4730]: W0320 15:54:00.721197 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc84a0097_0ea0_4397_b72b_07e391268b84.slice/crio-8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de WatchSource:0}: Error finding container 8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de: Status 404 returned error can't find the container with id 8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de Mar 20 15:54:00 crc kubenswrapper[4730]: I0320 15:54:00.857035 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" event={"ID":"c84a0097-0ea0-4397-b72b-07e391268b84","Type":"ContainerStarted","Data":"8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de"} Mar 20 15:54:01 crc kubenswrapper[4730]: I0320 15:54:01.864179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" event={"ID":"c84a0097-0ea0-4397-b72b-07e391268b84","Type":"ContainerStarted","Data":"cd338cd8acc0dfd58bb17dd35d4fa074369101fa940bc1e78ceafdde3c9aa8ec"} Mar 20 15:54:01 crc 
kubenswrapper[4730]: I0320 15:54:01.878297 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" podStartSLOduration=1.029161386 podStartE2EDuration="1.878278472s" podCreationTimestamp="2026-03-20 15:54:00 +0000 UTC" firstStartedPulling="2026-03-20 15:54:00.723690357 +0000 UTC m=+899.937061716" lastFinishedPulling="2026-03-20 15:54:01.572807423 +0000 UTC m=+900.786178802" observedRunningTime="2026-03-20 15:54:01.876419798 +0000 UTC m=+901.089791207" watchObservedRunningTime="2026-03-20 15:54:01.878278472 +0000 UTC m=+901.091649841" Mar 20 15:54:02 crc kubenswrapper[4730]: I0320 15:54:02.876279 4730 generic.go:334] "Generic (PLEG): container finished" podID="c84a0097-0ea0-4397-b72b-07e391268b84" containerID="cd338cd8acc0dfd58bb17dd35d4fa074369101fa940bc1e78ceafdde3c9aa8ec" exitCode=0 Mar 20 15:54:02 crc kubenswrapper[4730]: I0320 15:54:02.876413 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" event={"ID":"c84a0097-0ea0-4397-b72b-07e391268b84","Type":"ContainerDied","Data":"cd338cd8acc0dfd58bb17dd35d4fa074369101fa940bc1e78ceafdde3c9aa8ec"} Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.134069 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.267111 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6dw\" (UniqueName: \"kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw\") pod \"c84a0097-0ea0-4397-b72b-07e391268b84\" (UID: \"c84a0097-0ea0-4397-b72b-07e391268b84\") " Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.273492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw" (OuterVolumeSpecName: "kube-api-access-5j6dw") pod "c84a0097-0ea0-4397-b72b-07e391268b84" (UID: "c84a0097-0ea0-4397-b72b-07e391268b84"). InnerVolumeSpecName "kube-api-access-5j6dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.369270 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6dw\" (UniqueName: \"kubernetes.io/projected/c84a0097-0ea0-4397-b72b-07e391268b84-kube-api-access-5j6dw\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.559506 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-nq6dd" Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.616978 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567028-s6xcp"] Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.623215 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567028-s6xcp"] Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.889125 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" 
event={"ID":"c84a0097-0ea0-4397-b72b-07e391268b84","Type":"ContainerDied","Data":"8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de"} Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.889169 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f459ea850de069c76b8e2871471c56ffe66a9caee006badc42c96a8294bc6de" Mar 20 15:54:04 crc kubenswrapper[4730]: I0320 15:54:04.889304 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567034-sdvfb" Mar 20 15:54:05 crc kubenswrapper[4730]: I0320 15:54:05.543957 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56ca246-99ac-4397-a499-62738ac94a39" path="/var/lib/kubelet/pods/e56ca246-99ac-4397-a499-62738ac94a39/volumes" Mar 20 15:54:12 crc kubenswrapper[4730]: I0320 15:54:12.880338 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:54:12 crc kubenswrapper[4730]: I0320 15:54:12.880993 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.796832 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck"] Mar 20 15:54:16 crc kubenswrapper[4730]: E0320 15:54:16.797560 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84a0097-0ea0-4397-b72b-07e391268b84" containerName="oc" Mar 20 15:54:16 crc 
kubenswrapper[4730]: I0320 15:54:16.797572 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84a0097-0ea0-4397-b72b-07e391268b84" containerName="oc" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.797685 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84a0097-0ea0-4397-b72b-07e391268b84" containerName="oc" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.798462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.800650 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.850161 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck"] Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.953990 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.954070 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:16 crc kubenswrapper[4730]: I0320 15:54:16.954100 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntq6\" (UniqueName: \"kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.055849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.056180 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntq6\" (UniqueName: \"kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.056353 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.056443 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.056777 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.081276 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntq6\" (UniqueName: \"kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.115129 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.347571 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck"] Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.987164 4730 generic.go:334] "Generic (PLEG): container finished" podID="aac6cea5-e666-44b1-9507-f57de2361c40" containerID="14b29fd92f84091c54c3a87b6489617a775cf21b2dce3a2e2f91de16fefc572f" exitCode=0 Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.987203 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" event={"ID":"aac6cea5-e666-44b1-9507-f57de2361c40","Type":"ContainerDied","Data":"14b29fd92f84091c54c3a87b6489617a775cf21b2dce3a2e2f91de16fefc572f"} Mar 20 15:54:17 crc kubenswrapper[4730]: I0320 15:54:17.987227 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" event={"ID":"aac6cea5-e666-44b1-9507-f57de2361c40","Type":"ContainerStarted","Data":"02bc4b30e192f9e7bce0c5703d33941a824f5102e9a1fc90651580ea747816ca"} Mar 20 15:54:19 crc kubenswrapper[4730]: I0320 15:54:19.942696 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9kgl8" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" containerID="cri-o://dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f" gracePeriod=15 Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:19.999953 4730 generic.go:334] "Generic (PLEG): container finished" podID="aac6cea5-e666-44b1-9507-f57de2361c40" containerID="d09021f97f624fe5bcbe4d6319ffb155927de148a56609e6902f09cbfa54760c" exitCode=0 Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.000055 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" event={"ID":"aac6cea5-e666-44b1-9507-f57de2361c40","Type":"ContainerDied","Data":"d09021f97f624fe5bcbe4d6319ffb155927de148a56609e6902f09cbfa54760c"} Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.362795 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9kgl8_5edbd5a9-6c8b-4ef8-950f-58deaecf36ee/console/0.log" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.363081 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500080 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500138 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500155 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500190 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qvx5\" (UniqueName: 
\"kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500294 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.500331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca\") pod \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\" (UID: \"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee\") " Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.501042 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config" (OuterVolumeSpecName: "console-config") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.501098 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca" (OuterVolumeSpecName: "service-ca") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.501219 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.501283 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.505433 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.505679 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.508597 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5" (OuterVolumeSpecName: "kube-api-access-4qvx5") pod "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" (UID: "5edbd5a9-6c8b-4ef8-950f-58deaecf36ee"). InnerVolumeSpecName "kube-api-access-4qvx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601554 4730 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601583 4730 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601592 4730 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601602 4730 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601609 4730 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-console-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601617 4730 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:20 crc kubenswrapper[4730]: I0320 15:54:20.601625 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qvx5\" (UniqueName: \"kubernetes.io/projected/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee-kube-api-access-4qvx5\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.006510 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9kgl8_5edbd5a9-6c8b-4ef8-950f-58deaecf36ee/console/0.log" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.007286 4730 generic.go:334] "Generic (PLEG): container finished" podID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerID="dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f" exitCode=2 Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.007361 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9kgl8" event={"ID":"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee","Type":"ContainerDied","Data":"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f"} Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.007518 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9kgl8" 
event={"ID":"5edbd5a9-6c8b-4ef8-950f-58deaecf36ee","Type":"ContainerDied","Data":"e011dbdf40941c9f2e1edba06bd23dad1736901c7815ace4b7b103d548c5c8d5"} Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.007551 4730 scope.go:117] "RemoveContainer" containerID="dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.007373 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9kgl8" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.011041 4730 generic.go:334] "Generic (PLEG): container finished" podID="aac6cea5-e666-44b1-9507-f57de2361c40" containerID="7774bdd106f067fcc2f20edd2778652f2a8c0bb9018791d243c17e09c7b40daa" exitCode=0 Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.011090 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" event={"ID":"aac6cea5-e666-44b1-9507-f57de2361c40","Type":"ContainerDied","Data":"7774bdd106f067fcc2f20edd2778652f2a8c0bb9018791d243c17e09c7b40daa"} Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.034337 4730 scope.go:117] "RemoveContainer" containerID="dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f" Mar 20 15:54:21 crc kubenswrapper[4730]: E0320 15:54:21.034718 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f\": container with ID starting with dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f not found: ID does not exist" containerID="dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.034750 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f"} err="failed to get container status \"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f\": rpc error: code = NotFound desc = could not find container \"dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f\": container with ID starting with dd7ad8497736491ff86002b539ccd73a88dfd723919819049a39b445ea55904f not found: ID does not exist" Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.050377 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.055446 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9kgl8"] Mar 20 15:54:21 crc kubenswrapper[4730]: I0320 15:54:21.542220 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" path="/var/lib/kubelet/pods/5edbd5a9-6c8b-4ef8-950f-58deaecf36ee/volumes" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.340631 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.523018 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util\") pod \"aac6cea5-e666-44b1-9507-f57de2361c40\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.523063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle\") pod \"aac6cea5-e666-44b1-9507-f57de2361c40\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.523138 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ntq6\" (UniqueName: \"kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6\") pod \"aac6cea5-e666-44b1-9507-f57de2361c40\" (UID: \"aac6cea5-e666-44b1-9507-f57de2361c40\") " Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.524072 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle" (OuterVolumeSpecName: "bundle") pod "aac6cea5-e666-44b1-9507-f57de2361c40" (UID: "aac6cea5-e666-44b1-9507-f57de2361c40"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.527772 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6" (OuterVolumeSpecName: "kube-api-access-8ntq6") pod "aac6cea5-e666-44b1-9507-f57de2361c40" (UID: "aac6cea5-e666-44b1-9507-f57de2361c40"). InnerVolumeSpecName "kube-api-access-8ntq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.558124 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util" (OuterVolumeSpecName: "util") pod "aac6cea5-e666-44b1-9507-f57de2361c40" (UID: "aac6cea5-e666-44b1-9507-f57de2361c40"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.624986 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-util\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.625021 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aac6cea5-e666-44b1-9507-f57de2361c40-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:22 crc kubenswrapper[4730]: I0320 15:54:22.625030 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ntq6\" (UniqueName: \"kubernetes.io/projected/aac6cea5-e666-44b1-9507-f57de2361c40-kube-api-access-8ntq6\") on node \"crc\" DevicePath \"\"" Mar 20 15:54:23 crc kubenswrapper[4730]: I0320 15:54:23.025289 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" event={"ID":"aac6cea5-e666-44b1-9507-f57de2361c40","Type":"ContainerDied","Data":"02bc4b30e192f9e7bce0c5703d33941a824f5102e9a1fc90651580ea747816ca"} Mar 20 15:54:23 crc kubenswrapper[4730]: I0320 15:54:23.025559 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02bc4b30e192f9e7bce0c5703d33941a824f5102e9a1fc90651580ea747816ca" Mar 20 15:54:23 crc kubenswrapper[4730]: I0320 15:54:23.025327 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck" Mar 20 15:54:28 crc kubenswrapper[4730]: I0320 15:54:28.416515 4730 scope.go:117] "RemoveContainer" containerID="44d76ae85c164cacb0e0982473fd32dc59c0d37d2af3868ef4b22b1a51c8b024" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.958609 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85db46595-g556k"] Mar 20 15:54:31 crc kubenswrapper[4730]: E0320 15:54:31.959465 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959482 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" Mar 20 15:54:31 crc kubenswrapper[4730]: E0320 15:54:31.959492 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="pull" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959500 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="pull" Mar 20 15:54:31 crc kubenswrapper[4730]: E0320 15:54:31.959518 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="extract" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959526 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="extract" Mar 20 15:54:31 crc kubenswrapper[4730]: E0320 15:54:31.959545 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="util" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959553 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="util" 
Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959678 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac6cea5-e666-44b1-9507-f57de2361c40" containerName="extract" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.959696 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edbd5a9-6c8b-4ef8-950f-58deaecf36ee" containerName="console" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.960202 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.962637 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.962847 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.964018 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.964090 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x4dwh" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.965773 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 20 15:54:31 crc kubenswrapper[4730]: I0320 15:54:31.980798 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85db46595-g556k"] Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.039310 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-apiservice-cert\") pod 
\"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.039616 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phhpv\" (UniqueName: \"kubernetes.io/projected/b41d974a-1e37-48ae-afdc-48c682c73637-kube-api-access-phhpv\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.039719 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-webhook-cert\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.141482 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phhpv\" (UniqueName: \"kubernetes.io/projected/b41d974a-1e37-48ae-afdc-48c682c73637-kube-api-access-phhpv\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.142289 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-webhook-cert\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 
15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.142564 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-apiservice-cert\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.153464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-webhook-cert\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.153470 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b41d974a-1e37-48ae-afdc-48c682c73637-apiservice-cert\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.171153 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phhpv\" (UniqueName: \"kubernetes.io/projected/b41d974a-1e37-48ae-afdc-48c682c73637-kube-api-access-phhpv\") pod \"metallb-operator-controller-manager-85db46595-g556k\" (UID: \"b41d974a-1e37-48ae-afdc-48c682c73637\") " pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.198481 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn"] Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.199401 4730 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.201742 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.202055 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.202176 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-s2pjt" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.217924 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn"] Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.279591 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.344462 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-webhook-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.344827 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-apiservice-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc 
kubenswrapper[4730]: I0320 15:54:32.344864 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ftl\" (UniqueName: \"kubernetes.io/projected/9dfba7eb-850f-4e34-a875-8ef219c8c783-kube-api-access-f9ftl\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.451889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ftl\" (UniqueName: \"kubernetes.io/projected/9dfba7eb-850f-4e34-a875-8ef219c8c783-kube-api-access-f9ftl\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.451991 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-webhook-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.452013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-apiservice-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.464061 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-apiservice-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.464206 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dfba7eb-850f-4e34-a875-8ef219c8c783-webhook-cert\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.472943 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ftl\" (UniqueName: \"kubernetes.io/projected/9dfba7eb-850f-4e34-a875-8ef219c8c783-kube-api-access-f9ftl\") pod \"metallb-operator-webhook-server-5f9794bdc6-ccfwn\" (UID: \"9dfba7eb-850f-4e34-a875-8ef219c8c783\") " pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.514621 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.561536 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85db46595-g556k"] Mar 20 15:54:32 crc kubenswrapper[4730]: W0320 15:54:32.571180 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb41d974a_1e37_48ae_afdc_48c682c73637.slice/crio-5dbb3b3778d921b9ff6cedf1a3638ca001a5a37b829e140f387e1673142465ed WatchSource:0}: Error finding container 5dbb3b3778d921b9ff6cedf1a3638ca001a5a37b829e140f387e1673142465ed: Status 404 returned error can't find the container with id 5dbb3b3778d921b9ff6cedf1a3638ca001a5a37b829e140f387e1673142465ed Mar 20 15:54:32 crc kubenswrapper[4730]: I0320 15:54:32.913132 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn"] Mar 20 15:54:32 crc kubenswrapper[4730]: W0320 15:54:32.915974 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dfba7eb_850f_4e34_a875_8ef219c8c783.slice/crio-8d8074f09deca60877d122fc28a31c0de4c928d762f1a39d0e3f320b72eb652d WatchSource:0}: Error finding container 8d8074f09deca60877d122fc28a31c0de4c928d762f1a39d0e3f320b72eb652d: Status 404 returned error can't find the container with id 8d8074f09deca60877d122fc28a31c0de4c928d762f1a39d0e3f320b72eb652d Mar 20 15:54:33 crc kubenswrapper[4730]: I0320 15:54:33.084982 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" event={"ID":"9dfba7eb-850f-4e34-a875-8ef219c8c783","Type":"ContainerStarted","Data":"8d8074f09deca60877d122fc28a31c0de4c928d762f1a39d0e3f320b72eb652d"} Mar 20 15:54:33 crc kubenswrapper[4730]: I0320 15:54:33.086231 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" event={"ID":"b41d974a-1e37-48ae-afdc-48c682c73637","Type":"ContainerStarted","Data":"5dbb3b3778d921b9ff6cedf1a3638ca001a5a37b829e140f387e1673142465ed"} Mar 20 15:54:36 crc kubenswrapper[4730]: I0320 15:54:36.106264 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" event={"ID":"b41d974a-1e37-48ae-afdc-48c682c73637","Type":"ContainerStarted","Data":"38328a93498c64211d3b290ad967186496858da8ab053fbb924892938b402136"} Mar 20 15:54:36 crc kubenswrapper[4730]: I0320 15:54:36.106904 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:54:36 crc kubenswrapper[4730]: I0320 15:54:36.129434 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" podStartSLOduration=2.116318922 podStartE2EDuration="5.129420201s" podCreationTimestamp="2026-03-20 15:54:31 +0000 UTC" firstStartedPulling="2026-03-20 15:54:32.574573977 +0000 UTC m=+931.787945346" lastFinishedPulling="2026-03-20 15:54:35.587675256 +0000 UTC m=+934.801046625" observedRunningTime="2026-03-20 15:54:36.128396652 +0000 UTC m=+935.341768021" watchObservedRunningTime="2026-03-20 15:54:36.129420201 +0000 UTC m=+935.342791570" Mar 20 15:54:38 crc kubenswrapper[4730]: I0320 15:54:38.122097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" event={"ID":"9dfba7eb-850f-4e34-a875-8ef219c8c783","Type":"ContainerStarted","Data":"0f0ad8601f2bdadc96283021b8decd64790e350286ce991c8a4fb3b19634540d"} Mar 20 15:54:38 crc kubenswrapper[4730]: I0320 15:54:38.130691 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:54:38 crc 
kubenswrapper[4730]: I0320 15:54:38.152481 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" podStartSLOduration=1.584921015 podStartE2EDuration="6.152464155s" podCreationTimestamp="2026-03-20 15:54:32 +0000 UTC" firstStartedPulling="2026-03-20 15:54:32.918533083 +0000 UTC m=+932.131904452" lastFinishedPulling="2026-03-20 15:54:37.486076223 +0000 UTC m=+936.699447592" observedRunningTime="2026-03-20 15:54:38.146880672 +0000 UTC m=+937.360252051" watchObservedRunningTime="2026-03-20 15:54:38.152464155 +0000 UTC m=+937.365835524" Mar 20 15:54:42 crc kubenswrapper[4730]: I0320 15:54:42.880116 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:54:42 crc kubenswrapper[4730]: I0320 15:54:42.880697 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:54:42 crc kubenswrapper[4730]: I0320 15:54:42.880740 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:54:42 crc kubenswrapper[4730]: I0320 15:54:42.881368 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Mar 20 15:54:42 crc kubenswrapper[4730]: I0320 15:54:42.881430 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393" gracePeriod=600 Mar 20 15:54:43 crc kubenswrapper[4730]: I0320 15:54:43.156055 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393" exitCode=0 Mar 20 15:54:43 crc kubenswrapper[4730]: I0320 15:54:43.156126 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393"} Mar 20 15:54:43 crc kubenswrapper[4730]: I0320 15:54:43.156416 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5"} Mar 20 15:54:43 crc kubenswrapper[4730]: I0320 15:54:43.156437 4730 scope.go:117] "RemoveContainer" containerID="44f44ed17252feb14ca678b8fd7bddf96639b37f5ddb8303898a1167aa46bf9c" Mar 20 15:54:52 crc kubenswrapper[4730]: I0320 15:54:52.520283 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f9794bdc6-ccfwn" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.283116 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85db46595-g556k" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.941397 
4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pbr4w"] Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.944153 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.947199 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.948219 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.948235 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2jvnd" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.948588 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx"] Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.949450 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.952515 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.959988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-conf\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960030 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-sockets\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960075 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-reloader\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960099 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-startup\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960118 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9pbt\" (UniqueName: 
\"kubernetes.io/projected/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-kube-api-access-s9pbt\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960144 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960175 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics-certs\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzdp\" (UniqueName: \"kubernetes.io/projected/70093cb9-bc43-427d-a8e4-5750058e2580-kube-api-access-tnzdp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.960214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:12 crc kubenswrapper[4730]: I0320 15:55:12.965601 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx"] Mar 20 15:55:13 crc 
kubenswrapper[4730]: I0320 15:55:13.061089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-startup\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061472 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9pbt\" (UniqueName: \"kubernetes.io/projected/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-kube-api-access-s9pbt\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061513 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061535 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics-certs\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061555 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzdp\" (UniqueName: \"kubernetes.io/projected/70093cb9-bc43-427d-a8e4-5750058e2580-kube-api-access-tnzdp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061582 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.061605 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-conf\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062042 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-startup\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062185 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-sockets\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062283 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-reloader\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " 
pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062420 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-sockets\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062185 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-frr-conf\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.062502 4730 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.062547 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert podName:70093cb9-bc43-427d-a8e4-5750058e2580 nodeName:}" failed. No retries permitted until 2026-03-20 15:55:13.562530743 +0000 UTC m=+972.775902112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert") pod "frr-k8s-webhook-server-bcc4b6f68-vmgrx" (UID: "70093cb9-bc43-427d-a8e4-5750058e2580") : secret "frr-k8s-webhook-server-cert" not found Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.062572 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-reloader\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.071270 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-metrics-certs\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.086217 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzdp\" (UniqueName: \"kubernetes.io/projected/70093cb9-bc43-427d-a8e4-5750058e2580-kube-api-access-tnzdp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.086308 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9pbt\" (UniqueName: \"kubernetes.io/projected/5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f-kube-api-access-s9pbt\") pod \"frr-k8s-pbr4w\" (UID: \"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f\") " pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.100219 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-jdxzq"] Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.101416 4730 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.105570 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tbvnw"] Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.109530 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.110340 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.113016 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.113047 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.113186 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zn4jj" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.113529 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.128798 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-jdxzq"] Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metallb-excludel2\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kfkvr\" (UniqueName: \"kubernetes.io/projected/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-kube-api-access-kfkvr\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163226 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-cert\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163340 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metrics-certs\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163416 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163448 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.163473 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tdw\" (UniqueName: 
\"kubernetes.io/projected/a42a5cd0-d730-4d48-8082-2491494e90ff-kube-api-access-n7tdw\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metallb-excludel2\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264598 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfkvr\" (UniqueName: \"kubernetes.io/projected/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-kube-api-access-kfkvr\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-cert\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metrics-certs\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264757 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: 
\"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264783 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.264806 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tdw\" (UniqueName: \"kubernetes.io/projected/a42a5cd0-d730-4d48-8082-2491494e90ff-kube-api-access-n7tdw\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.265239 4730 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.265286 4730 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.265320 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist podName:02f5e1af-23a0-43ef-89ad-9c5af9e98cfd nodeName:}" failed. No retries permitted until 2026-03-20 15:55:13.765301039 +0000 UTC m=+972.978672408 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist") pod "speaker-tbvnw" (UID: "02f5e1af-23a0-43ef-89ad-9c5af9e98cfd") : secret "metallb-memberlist" not found Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.265335 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metallb-excludel2\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.265364 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs podName:a42a5cd0-d730-4d48-8082-2491494e90ff nodeName:}" failed. No retries permitted until 2026-03-20 15:55:13.76534581 +0000 UTC m=+972.978717179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs") pod "controller-7bb4cc7c98-jdxzq" (UID: "a42a5cd0-d730-4d48-8082-2491494e90ff") : secret "controller-certs-secret" not found Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.266822 4730 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.271680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-metrics-certs\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.278568 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.279171 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-cert\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.285486 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfkvr\" (UniqueName: \"kubernetes.io/projected/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-kube-api-access-kfkvr\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.287068 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tdw\" (UniqueName: \"kubernetes.io/projected/a42a5cd0-d730-4d48-8082-2491494e90ff-kube-api-access-n7tdw\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.568369 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.571446 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70093cb9-bc43-427d-a8e4-5750058e2580-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vmgrx\" (UID: \"70093cb9-bc43-427d-a8e4-5750058e2580\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc 
kubenswrapper[4730]: I0320 15:55:13.586468 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.771600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.771975 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.772131 4730 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 15:55:13 crc kubenswrapper[4730]: E0320 15:55:13.772222 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist podName:02f5e1af-23a0-43ef-89ad-9c5af9e98cfd nodeName:}" failed. No retries permitted until 2026-03-20 15:55:14.772204418 +0000 UTC m=+973.985575787 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist") pod "speaker-tbvnw" (UID: "02f5e1af-23a0-43ef-89ad-9c5af9e98cfd") : secret "metallb-memberlist" not found Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.777072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a42a5cd0-d730-4d48-8082-2491494e90ff-metrics-certs\") pod \"controller-7bb4cc7c98-jdxzq\" (UID: \"a42a5cd0-d730-4d48-8082-2491494e90ff\") " pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:13 crc kubenswrapper[4730]: I0320 15:55:13.980104 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx"] Mar 20 15:55:13 crc kubenswrapper[4730]: W0320 15:55:13.989770 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70093cb9_bc43_427d_a8e4_5750058e2580.slice/crio-61cac70bdf88f8c040c18bd00bb7e5a81b732ea8e9e56b85389a3d342bb7c6a7 WatchSource:0}: Error finding container 61cac70bdf88f8c040c18bd00bb7e5a81b732ea8e9e56b85389a3d342bb7c6a7: Status 404 returned error can't find the container with id 61cac70bdf88f8c040c18bd00bb7e5a81b732ea8e9e56b85389a3d342bb7c6a7 Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.040579 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.330131 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"a41420ebc0affd4b2a65376f49566e8ae45051aeba6c79c3c846b89081ef4a08"} Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.331409 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" event={"ID":"70093cb9-bc43-427d-a8e4-5750058e2580","Type":"ContainerStarted","Data":"61cac70bdf88f8c040c18bd00bb7e5a81b732ea8e9e56b85389a3d342bb7c6a7"} Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.433703 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-jdxzq"] Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.785416 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.792834 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/02f5e1af-23a0-43ef-89ad-9c5af9e98cfd-memberlist\") pod \"speaker-tbvnw\" (UID: \"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd\") " pod="metallb-system/speaker-tbvnw" Mar 20 15:55:14 crc kubenswrapper[4730]: I0320 15:55:14.947844 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tbvnw" Mar 20 15:55:14 crc kubenswrapper[4730]: W0320 15:55:14.969877 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f5e1af_23a0_43ef_89ad_9c5af9e98cfd.slice/crio-917db0179f9c74d8205f5c4e198e877300d10d61df025b7083c47e3b07ef1ddc WatchSource:0}: Error finding container 917db0179f9c74d8205f5c4e198e877300d10d61df025b7083c47e3b07ef1ddc: Status 404 returned error can't find the container with id 917db0179f9c74d8205f5c4e198e877300d10d61df025b7083c47e3b07ef1ddc Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.356200 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbvnw" event={"ID":"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd","Type":"ContainerStarted","Data":"c0c9f70afccc4a389fadabdf265f8983e4aa5217c947cf3302a7e5cf0eda081c"} Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.356287 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbvnw" event={"ID":"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd","Type":"ContainerStarted","Data":"917db0179f9c74d8205f5c4e198e877300d10d61df025b7083c47e3b07ef1ddc"} Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.358869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-jdxzq" event={"ID":"a42a5cd0-d730-4d48-8082-2491494e90ff","Type":"ContainerStarted","Data":"28336ed68bedb4159591928ffdfad2081cddd80b2953c0212abad857c301462d"} Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.358901 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-jdxzq" event={"ID":"a42a5cd0-d730-4d48-8082-2491494e90ff","Type":"ContainerStarted","Data":"c88e8ce4fed6c1da66c130a28f982f0606a71a0fd295591ea15f375c11ac5ad2"} Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.358912 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-7bb4cc7c98-jdxzq" event={"ID":"a42a5cd0-d730-4d48-8082-2491494e90ff","Type":"ContainerStarted","Data":"a7884ca535db7dc481e5d6e2b8df9e19ac5dcfce14254770c591f3691aaeca72"} Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.359933 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:15 crc kubenswrapper[4730]: I0320 15:55:15.394423 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-jdxzq" podStartSLOduration=2.394404411 podStartE2EDuration="2.394404411s" podCreationTimestamp="2026-03-20 15:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:55:15.393489285 +0000 UTC m=+974.606860694" watchObservedRunningTime="2026-03-20 15:55:15.394404411 +0000 UTC m=+974.607775780" Mar 20 15:55:16 crc kubenswrapper[4730]: I0320 15:55:16.372062 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tbvnw" event={"ID":"02f5e1af-23a0-43ef-89ad-9c5af9e98cfd","Type":"ContainerStarted","Data":"8e6e0c714389e8537c520f55a5ad4a49fa5ada4cfbea8c26e74e7866e3191408"} Mar 20 15:55:16 crc kubenswrapper[4730]: I0320 15:55:16.394171 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tbvnw" podStartSLOduration=3.394152773 podStartE2EDuration="3.394152773s" podCreationTimestamp="2026-03-20 15:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:55:16.391013063 +0000 UTC m=+975.604384432" watchObservedRunningTime="2026-03-20 15:55:16.394152773 +0000 UTC m=+975.607524142" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.377783 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tbvnw" Mar 20 15:55:17 
crc kubenswrapper[4730]: I0320 15:55:17.685206 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.689863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.697627 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.755419 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkfz\" (UniqueName: \"kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.755466 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.755532 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.857266 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.857656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.857713 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkfz\" (UniqueName: \"kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.858153 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.858667 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:17 crc kubenswrapper[4730]: I0320 15:55:17.892628 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkfz\" (UniqueName: 
\"kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz\") pod \"redhat-marketplace-dczcg\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:18 crc kubenswrapper[4730]: I0320 15:55:18.008164 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:18 crc kubenswrapper[4730]: I0320 15:55:18.456282 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:18 crc kubenswrapper[4730]: W0320 15:55:18.465571 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod146c64ad_b085_471a_a540_7faa5c6e969f.slice/crio-aa1e2ed4596b23c9bada0e041e5fe2d2c27f666f02d8278cf4e320c407e509a5 WatchSource:0}: Error finding container aa1e2ed4596b23c9bada0e041e5fe2d2c27f666f02d8278cf4e320c407e509a5: Status 404 returned error can't find the container with id aa1e2ed4596b23c9bada0e041e5fe2d2c27f666f02d8278cf4e320c407e509a5 Mar 20 15:55:19 crc kubenswrapper[4730]: I0320 15:55:19.404098 4730 generic.go:334] "Generic (PLEG): container finished" podID="146c64ad-b085-471a-a540-7faa5c6e969f" containerID="887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da" exitCode=0 Mar 20 15:55:19 crc kubenswrapper[4730]: I0320 15:55:19.404210 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerDied","Data":"887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da"} Mar 20 15:55:19 crc kubenswrapper[4730]: I0320 15:55:19.404431 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" 
event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerStarted","Data":"aa1e2ed4596b23c9bada0e041e5fe2d2c27f666f02d8278cf4e320c407e509a5"} Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.427537 4730 generic.go:334] "Generic (PLEG): container finished" podID="146c64ad-b085-471a-a540-7faa5c6e969f" containerID="83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e" exitCode=0 Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.427705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerDied","Data":"83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e"} Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.429486 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" event={"ID":"70093cb9-bc43-427d-a8e4-5750058e2580","Type":"ContainerStarted","Data":"a8cf3edb8ebd289046edc5c4bdd1333705cc765b9f08dac3ce17d249ec624915"} Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.430055 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.432385 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f" containerID="98a3547c9abda663872929df276239d41f451090d85f020b6dfac556444da1b5" exitCode=0 Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.432413 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerDied","Data":"98a3547c9abda663872929df276239d41f451090d85f020b6dfac556444da1b5"} Mar 20 15:55:22 crc kubenswrapper[4730]: I0320 15:55:22.495713 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" podStartSLOduration=3.138963081 podStartE2EDuration="10.495695168s" podCreationTimestamp="2026-03-20 15:55:12 +0000 UTC" firstStartedPulling="2026-03-20 15:55:13.991666191 +0000 UTC m=+973.205037550" lastFinishedPulling="2026-03-20 15:55:21.348398268 +0000 UTC m=+980.561769637" observedRunningTime="2026-03-20 15:55:22.492800846 +0000 UTC m=+981.706172215" watchObservedRunningTime="2026-03-20 15:55:22.495695168 +0000 UTC m=+981.709066527" Mar 20 15:55:23 crc kubenswrapper[4730]: I0320 15:55:23.440020 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f" containerID="60c71f27837972075d15c3f2fb8934da891bfce388025185cf314ba79c175488" exitCode=0 Mar 20 15:55:23 crc kubenswrapper[4730]: I0320 15:55:23.440108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerDied","Data":"60c71f27837972075d15c3f2fb8934da891bfce388025185cf314ba79c175488"} Mar 20 15:55:23 crc kubenswrapper[4730]: I0320 15:55:23.444595 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerStarted","Data":"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17"} Mar 20 15:55:23 crc kubenswrapper[4730]: I0320 15:55:23.488101 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dczcg" podStartSLOduration=4.880344984 podStartE2EDuration="6.48808524s" podCreationTimestamp="2026-03-20 15:55:17 +0000 UTC" firstStartedPulling="2026-03-20 15:55:21.273217343 +0000 UTC m=+980.486588722" lastFinishedPulling="2026-03-20 15:55:22.880957599 +0000 UTC m=+982.094328978" observedRunningTime="2026-03-20 15:55:23.484713774 +0000 UTC m=+982.698085153" watchObservedRunningTime="2026-03-20 15:55:23.48808524 +0000 UTC 
m=+982.701456599" Mar 20 15:55:24 crc kubenswrapper[4730]: I0320 15:55:24.050636 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-jdxzq" Mar 20 15:55:24 crc kubenswrapper[4730]: I0320 15:55:24.450969 4730 generic.go:334] "Generic (PLEG): container finished" podID="5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f" containerID="8dff8d10a59de330a94e20e32ff057bcc85bf5fc56e98ab20b2537a99bfb05be" exitCode=0 Mar 20 15:55:24 crc kubenswrapper[4730]: I0320 15:55:24.451049 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerDied","Data":"8dff8d10a59de330a94e20e32ff057bcc85bf5fc56e98ab20b2537a99bfb05be"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462132 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"fd5bab4984293792a6250d51f252a53fa1ce0338e3285efabcace1946ca454a4"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462527 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462541 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"01e81d52130169fe0ae15d0067de3a90f219f6143074655cb14ff2d07033cf9a"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462553 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"876622305eb46edc2eb833d62f52ed83f5ac5ac4685cf4bd0284a002ae0d0a91"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462587 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" 
event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"714771251e434b57c6f751228bf4f567739a60ee3f80dc96d5cd6dbf2d848f76"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462597 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"4ec8ba38fb2e568025562a8690bbb9b1b92aec199792851980dba74f4875592a"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.462606 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pbr4w" event={"ID":"5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f","Type":"ContainerStarted","Data":"0d8b6c8b9f596c16cbd5269ba63455b8069330b3a5b27540a07a26f562be3cc8"} Mar 20 15:55:25 crc kubenswrapper[4730]: I0320 15:55:25.491950 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pbr4w" podStartSLOduration=5.560453518 podStartE2EDuration="13.491929215s" podCreationTimestamp="2026-03-20 15:55:12 +0000 UTC" firstStartedPulling="2026-03-20 15:55:13.429340425 +0000 UTC m=+972.642711794" lastFinishedPulling="2026-03-20 15:55:21.360816122 +0000 UTC m=+980.574187491" observedRunningTime="2026-03-20 15:55:25.488525568 +0000 UTC m=+984.701896967" watchObservedRunningTime="2026-03-20 15:55:25.491929215 +0000 UTC m=+984.705300584" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.009301 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.009617 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.048391 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 
15:55:28.279295 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.318555 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.527774 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:28 crc kubenswrapper[4730]: I0320 15:55:28.568714 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.491444 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dczcg" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="registry-server" containerID="cri-o://19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17" gracePeriod=2 Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.726725 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.727961 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.750331 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.857514 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.857561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.857606 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjls\" (UniqueName: \"kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.862557 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958557 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content\") pod \"146c64ad-b085-471a-a540-7faa5c6e969f\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958596 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzkfz\" (UniqueName: \"kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz\") pod \"146c64ad-b085-471a-a540-7faa5c6e969f\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958728 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities\") pod \"146c64ad-b085-471a-a540-7faa5c6e969f\" (UID: \"146c64ad-b085-471a-a540-7faa5c6e969f\") " Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958890 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958923 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.958966 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzjls\" (UniqueName: \"kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.959529 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.959720 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities" (OuterVolumeSpecName: "utilities") pod "146c64ad-b085-471a-a540-7faa5c6e969f" (UID: "146c64ad-b085-471a-a540-7faa5c6e969f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.959920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.965469 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz" (OuterVolumeSpecName: "kube-api-access-zzkfz") pod "146c64ad-b085-471a-a540-7faa5c6e969f" (UID: "146c64ad-b085-471a-a540-7faa5c6e969f"). InnerVolumeSpecName "kube-api-access-zzkfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.978640 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzjls\" (UniqueName: \"kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls\") pod \"community-operators-ccfvp\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:30 crc kubenswrapper[4730]: I0320 15:55:30.988652 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "146c64ad-b085-471a-a540-7faa5c6e969f" (UID: "146c64ad-b085-471a-a540-7faa5c6e969f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.056572 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.060937 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.061079 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/146c64ad-b085-471a-a540-7faa5c6e969f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.061166 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzkfz\" (UniqueName: \"kubernetes.io/projected/146c64ad-b085-471a-a540-7faa5c6e969f-kube-api-access-zzkfz\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.498952 4730 generic.go:334] "Generic (PLEG): container finished" podID="146c64ad-b085-471a-a540-7faa5c6e969f" containerID="19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17" exitCode=0 Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.498993 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerDied","Data":"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17"} Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.499020 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dczcg" event={"ID":"146c64ad-b085-471a-a540-7faa5c6e969f","Type":"ContainerDied","Data":"aa1e2ed4596b23c9bada0e041e5fe2d2c27f666f02d8278cf4e320c407e509a5"} Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.499038 4730 scope.go:117] "RemoveContainer" containerID="19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17" Mar 20 15:55:31 crc 
kubenswrapper[4730]: I0320 15:55:31.499091 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dczcg" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.515033 4730 scope.go:117] "RemoveContainer" containerID="83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.531305 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.536826 4730 scope.go:117] "RemoveContainer" containerID="887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.544932 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dczcg"] Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.553471 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.563781 4730 scope.go:117] "RemoveContainer" containerID="19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17" Mar 20 15:55:31 crc kubenswrapper[4730]: E0320 15:55:31.564663 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17\": container with ID starting with 19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17 not found: ID does not exist" containerID="19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.564697 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17"} err="failed to get container status 
\"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17\": rpc error: code = NotFound desc = could not find container \"19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17\": container with ID starting with 19eabcfbe8c8ecc95b9e7dabd08da3c0a83dd15b9e6ba2626654ee2384904d17 not found: ID does not exist" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.564721 4730 scope.go:117] "RemoveContainer" containerID="83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e" Mar 20 15:55:31 crc kubenswrapper[4730]: E0320 15:55:31.565369 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e\": container with ID starting with 83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e not found: ID does not exist" containerID="83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.565437 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e"} err="failed to get container status \"83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e\": rpc error: code = NotFound desc = could not find container \"83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e\": container with ID starting with 83d094f34ae090c9eeccb57b8f05b10f573a3244c65881d549b6ba6a62fc367e not found: ID does not exist" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.565464 4730 scope.go:117] "RemoveContainer" containerID="887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da" Mar 20 15:55:31 crc kubenswrapper[4730]: E0320 15:55:31.565906 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da\": container with ID starting with 887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da not found: ID does not exist" containerID="887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da" Mar 20 15:55:31 crc kubenswrapper[4730]: I0320 15:55:31.565940 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da"} err="failed to get container status \"887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da\": rpc error: code = NotFound desc = could not find container \"887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da\": container with ID starting with 887bc31eff014e32cf26c264c448be6c71dca22cdbc492ad535c3590b7f3e8da not found: ID does not exist" Mar 20 15:55:31 crc kubenswrapper[4730]: W0320 15:55:31.567616 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67135f13_b182_4e78_b64d_59e924cc6d06.slice/crio-658a195d5ec687dd420baf8249b97d002c1f423d4390763b728a9e13cc7604b3 WatchSource:0}: Error finding container 658a195d5ec687dd420baf8249b97d002c1f423d4390763b728a9e13cc7604b3: Status 404 returned error can't find the container with id 658a195d5ec687dd420baf8249b97d002c1f423d4390763b728a9e13cc7604b3 Mar 20 15:55:32 crc kubenswrapper[4730]: I0320 15:55:32.508328 4730 generic.go:334] "Generic (PLEG): container finished" podID="67135f13-b182-4e78-b64d-59e924cc6d06" containerID="0248869be7a45daf83343efdfb1d43f4b7d10b0b6b268922404b998ed0f2cd9e" exitCode=0 Mar 20 15:55:32 crc kubenswrapper[4730]: I0320 15:55:32.508382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerDied","Data":"0248869be7a45daf83343efdfb1d43f4b7d10b0b6b268922404b998ed0f2cd9e"} Mar 20 
15:55:32 crc kubenswrapper[4730]: I0320 15:55:32.508704 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerStarted","Data":"658a195d5ec687dd420baf8249b97d002c1f423d4390763b728a9e13cc7604b3"} Mar 20 15:55:33 crc kubenswrapper[4730]: I0320 15:55:33.516795 4730 generic.go:334] "Generic (PLEG): container finished" podID="67135f13-b182-4e78-b64d-59e924cc6d06" containerID="303fadbe91512351177d280b599a2b8cc89d2b5b2c6d9ee58ec15258affc5eb8" exitCode=0 Mar 20 15:55:33 crc kubenswrapper[4730]: I0320 15:55:33.516875 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerDied","Data":"303fadbe91512351177d280b599a2b8cc89d2b5b2c6d9ee58ec15258affc5eb8"} Mar 20 15:55:33 crc kubenswrapper[4730]: I0320 15:55:33.542746 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" path="/var/lib/kubelet/pods/146c64ad-b085-471a-a540-7faa5c6e969f/volumes" Mar 20 15:55:33 crc kubenswrapper[4730]: I0320 15:55:33.591832 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vmgrx" Mar 20 15:55:34 crc kubenswrapper[4730]: I0320 15:55:34.952618 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tbvnw" Mar 20 15:55:35 crc kubenswrapper[4730]: I0320 15:55:35.540862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerStarted","Data":"6f9edd02be6f2dfb433a49be2863aba5cd96a3fbb691894959161dfee263cc66"} Mar 20 15:55:35 crc kubenswrapper[4730]: I0320 15:55:35.565599 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-ccfvp" podStartSLOduration=3.023705403 podStartE2EDuration="5.565583858s" podCreationTimestamp="2026-03-20 15:55:30 +0000 UTC" firstStartedPulling="2026-03-20 15:55:32.510221615 +0000 UTC m=+991.723592994" lastFinishedPulling="2026-03-20 15:55:35.05210009 +0000 UTC m=+994.265471449" observedRunningTime="2026-03-20 15:55:35.558776014 +0000 UTC m=+994.772147383" watchObservedRunningTime="2026-03-20 15:55:35.565583858 +0000 UTC m=+994.778955217" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.825694 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:37 crc kubenswrapper[4730]: E0320 15:55:37.826157 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="extract-utilities" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.826168 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="extract-utilities" Mar 20 15:55:37 crc kubenswrapper[4730]: E0320 15:55:37.826176 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="extract-content" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.826183 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="extract-content" Mar 20 15:55:37 crc kubenswrapper[4730]: E0320 15:55:37.826196 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="registry-server" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.826201 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="registry-server" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.826321 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="146c64ad-b085-471a-a540-7faa5c6e969f" containerName="registry-server" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.826781 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.828752 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xkpgj" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.828893 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.828892 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.847847 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:37 crc kubenswrapper[4730]: I0320 15:55:37.978047 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtb8\" (UniqueName: \"kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8\") pod \"openstack-operator-index-6f444\" (UID: \"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953\") " pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:38 crc kubenswrapper[4730]: I0320 15:55:38.079413 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtb8\" (UniqueName: \"kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8\") pod \"openstack-operator-index-6f444\" (UID: \"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953\") " pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:38 crc kubenswrapper[4730]: I0320 15:55:38.113148 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fhtb8\" (UniqueName: \"kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8\") pod \"openstack-operator-index-6f444\" (UID: \"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953\") " pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:38 crc kubenswrapper[4730]: I0320 15:55:38.143291 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:38 crc kubenswrapper[4730]: I0320 15:55:38.567013 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:38 crc kubenswrapper[4730]: W0320 15:55:38.570194 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded34ebb3_3d6a_4ddf_8364_fd5b7baa6953.slice/crio-32941a4fdec1fcd9654c63bca33d3659cc2cf1d8de808659c098b3722cb53ead WatchSource:0}: Error finding container 32941a4fdec1fcd9654c63bca33d3659cc2cf1d8de808659c098b3722cb53ead: Status 404 returned error can't find the container with id 32941a4fdec1fcd9654c63bca33d3659cc2cf1d8de808659c098b3722cb53ead Mar 20 15:55:39 crc kubenswrapper[4730]: I0320 15:55:39.561552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6f444" event={"ID":"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953","Type":"ContainerStarted","Data":"32941a4fdec1fcd9654c63bca33d3659cc2cf1d8de808659c098b3722cb53ead"} Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.056745 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.056898 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.105120 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.574620 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6f444" event={"ID":"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953","Type":"ContainerStarted","Data":"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146"} Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.601352 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6f444" podStartSLOduration=2.202520615 podStartE2EDuration="4.601332889s" podCreationTimestamp="2026-03-20 15:55:37 +0000 UTC" firstStartedPulling="2026-03-20 15:55:38.571930765 +0000 UTC m=+997.785302144" lastFinishedPulling="2026-03-20 15:55:40.970743049 +0000 UTC m=+1000.184114418" observedRunningTime="2026-03-20 15:55:41.59960695 +0000 UTC m=+1000.812978319" watchObservedRunningTime="2026-03-20 15:55:41.601332889 +0000 UTC m=+1000.814704258" Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.693232 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:41 crc kubenswrapper[4730]: I0320 15:55:41.702377 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.499822 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f9xcd"] Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.501732 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.510433 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f9xcd"] Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.645249 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgg5s\" (UniqueName: \"kubernetes.io/projected/d125a115-3173-4a52-8794-2832951fa428-kube-api-access-kgg5s\") pod \"openstack-operator-index-f9xcd\" (UID: \"d125a115-3173-4a52-8794-2832951fa428\") " pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.748289 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgg5s\" (UniqueName: \"kubernetes.io/projected/d125a115-3173-4a52-8794-2832951fa428-kube-api-access-kgg5s\") pod \"openstack-operator-index-f9xcd\" (UID: \"d125a115-3173-4a52-8794-2832951fa428\") " pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.773566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgg5s\" (UniqueName: \"kubernetes.io/projected/d125a115-3173-4a52-8794-2832951fa428-kube-api-access-kgg5s\") pod \"openstack-operator-index-f9xcd\" (UID: \"d125a115-3173-4a52-8794-2832951fa428\") " pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:42 crc kubenswrapper[4730]: I0320 15:55:42.823817 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.288385 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f9xcd"] Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.288858 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pbr4w" Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.588890 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9xcd" event={"ID":"d125a115-3173-4a52-8794-2832951fa428","Type":"ContainerStarted","Data":"0bc71ca5af35eb6dd545efbdcabf0b6ae657161f87c08e6fd6b7d141c6a33724"} Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.589307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f9xcd" event={"ID":"d125a115-3173-4a52-8794-2832951fa428","Type":"ContainerStarted","Data":"9686c2d37b01c0de22f81c267b06b28451fc2e46e0983435c2d9281955e8f5ff"} Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.588956 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6f444" podUID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" containerName="registry-server" containerID="cri-o://36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146" gracePeriod=2 Mar 20 15:55:43 crc kubenswrapper[4730]: I0320 15:55:43.614193 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f9xcd" podStartSLOduration=1.546950824 podStartE2EDuration="1.614172472s" podCreationTimestamp="2026-03-20 15:55:42 +0000 UTC" firstStartedPulling="2026-03-20 15:55:43.292473164 +0000 UTC m=+1002.505844573" lastFinishedPulling="2026-03-20 15:55:43.359694852 +0000 UTC m=+1002.573066221" observedRunningTime="2026-03-20 15:55:43.608961753 +0000 UTC 
m=+1002.822333152" watchObservedRunningTime="2026-03-20 15:55:43.614172472 +0000 UTC m=+1002.827543851" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.058067 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.177359 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtb8\" (UniqueName: \"kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8\") pod \"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953\" (UID: \"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953\") " Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.185494 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8" (OuterVolumeSpecName: "kube-api-access-fhtb8") pod "ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" (UID: "ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953"). InnerVolumeSpecName "kube-api-access-fhtb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.278915 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtb8\" (UniqueName: \"kubernetes.io/projected/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953-kube-api-access-fhtb8\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.596153 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" containerID="36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146" exitCode=0 Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.596865 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6f444" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.598329 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6f444" event={"ID":"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953","Type":"ContainerDied","Data":"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146"} Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.598378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6f444" event={"ID":"ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953","Type":"ContainerDied","Data":"32941a4fdec1fcd9654c63bca33d3659cc2cf1d8de808659c098b3722cb53ead"} Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.598399 4730 scope.go:117] "RemoveContainer" containerID="36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.620501 4730 scope.go:117] "RemoveContainer" containerID="36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146" Mar 20 15:55:44 crc kubenswrapper[4730]: E0320 15:55:44.621761 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146\": container with ID starting with 36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146 not found: ID does not exist" containerID="36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.621807 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146"} err="failed to get container status \"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146\": rpc error: code = NotFound desc = could not find container 
\"36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146\": container with ID starting with 36dbac8f732d4df3db5ae5bc7263b80ff9feb79f642eaa9edc01e075cd51f146 not found: ID does not exist" Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.633199 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:44 crc kubenswrapper[4730]: I0320 15:55:44.645818 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6f444"] Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.486153 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.486754 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ccfvp" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="registry-server" containerID="cri-o://6f9edd02be6f2dfb433a49be2863aba5cd96a3fbb691894959161dfee263cc66" gracePeriod=2 Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.542176 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" path="/var/lib/kubelet/pods/ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953/volumes" Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.611429 4730 generic.go:334] "Generic (PLEG): container finished" podID="67135f13-b182-4e78-b64d-59e924cc6d06" containerID="6f9edd02be6f2dfb433a49be2863aba5cd96a3fbb691894959161dfee263cc66" exitCode=0 Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.611479 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerDied","Data":"6f9edd02be6f2dfb433a49be2863aba5cd96a3fbb691894959161dfee263cc66"} Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.828806 4730 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.904858 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities\") pod \"67135f13-b182-4e78-b64d-59e924cc6d06\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.904945 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content\") pod \"67135f13-b182-4e78-b64d-59e924cc6d06\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.905018 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzjls\" (UniqueName: \"kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls\") pod \"67135f13-b182-4e78-b64d-59e924cc6d06\" (UID: \"67135f13-b182-4e78-b64d-59e924cc6d06\") " Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.906016 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities" (OuterVolumeSpecName: "utilities") pod "67135f13-b182-4e78-b64d-59e924cc6d06" (UID: "67135f13-b182-4e78-b64d-59e924cc6d06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.920425 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls" (OuterVolumeSpecName: "kube-api-access-kzjls") pod "67135f13-b182-4e78-b64d-59e924cc6d06" (UID: "67135f13-b182-4e78-b64d-59e924cc6d06"). 
InnerVolumeSpecName "kube-api-access-kzjls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:55:45 crc kubenswrapper[4730]: I0320 15:55:45.964079 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67135f13-b182-4e78-b64d-59e924cc6d06" (UID: "67135f13-b182-4e78-b64d-59e924cc6d06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.006358 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzjls\" (UniqueName: \"kubernetes.io/projected/67135f13-b182-4e78-b64d-59e924cc6d06-kube-api-access-kzjls\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.006593 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.006652 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67135f13-b182-4e78-b64d-59e924cc6d06-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.623022 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccfvp" event={"ID":"67135f13-b182-4e78-b64d-59e924cc6d06","Type":"ContainerDied","Data":"658a195d5ec687dd420baf8249b97d002c1f423d4390763b728a9e13cc7604b3"} Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.623094 4730 scope.go:117] "RemoveContainer" containerID="6f9edd02be6f2dfb433a49be2863aba5cd96a3fbb691894959161dfee263cc66" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.623177 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccfvp" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.655585 4730 scope.go:117] "RemoveContainer" containerID="303fadbe91512351177d280b599a2b8cc89d2b5b2c6d9ee58ec15258affc5eb8" Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.677539 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.678581 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ccfvp"] Mar 20 15:55:46 crc kubenswrapper[4730]: I0320 15:55:46.700571 4730 scope.go:117] "RemoveContainer" containerID="0248869be7a45daf83343efdfb1d43f4b7d10b0b6b268922404b998ed0f2cd9e" Mar 20 15:55:47 crc kubenswrapper[4730]: I0320 15:55:47.542582 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" path="/var/lib/kubelet/pods/67135f13-b182-4e78-b64d-59e924cc6d06/volumes" Mar 20 15:55:52 crc kubenswrapper[4730]: I0320 15:55:52.824841 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:52 crc kubenswrapper[4730]: I0320 15:55:52.825305 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:52 crc kubenswrapper[4730]: I0320 15:55:52.863578 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:55:53 crc kubenswrapper[4730]: I0320 15:55:53.706892 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f9xcd" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.132883 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567036-mnkd8"] Mar 20 15:56:00 crc 
kubenswrapper[4730]: E0320 15:56:00.133659 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" containerName="registry-server" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133671 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" containerName="registry-server" Mar 20 15:56:00 crc kubenswrapper[4730]: E0320 15:56:00.133679 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="extract-content" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133685 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="extract-content" Mar 20 15:56:00 crc kubenswrapper[4730]: E0320 15:56:00.133696 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="extract-utilities" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133703 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="extract-utilities" Mar 20 15:56:00 crc kubenswrapper[4730]: E0320 15:56:00.133713 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="registry-server" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133718 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="registry-server" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133816 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="67135f13-b182-4e78-b64d-59e924cc6d06" containerName="registry-server" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.133827 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed34ebb3-3d6a-4ddf-8364-fd5b7baa6953" containerName="registry-server" Mar 20 15:56:00 crc 
kubenswrapper[4730]: I0320 15:56:00.134550 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.136929 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.136998 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.137140 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567036-mnkd8"] Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.137228 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.200290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9d9\" (UniqueName: \"kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9\") pod \"auto-csr-approver-29567036-mnkd8\" (UID: \"889da12d-843a-4c71-8d48-cbb0360b024a\") " pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.301768 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9d9\" (UniqueName: \"kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9\") pod \"auto-csr-approver-29567036-mnkd8\" (UID: \"889da12d-843a-4c71-8d48-cbb0360b024a\") " pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.318544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9d9\" (UniqueName: \"kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9\") pod 
\"auto-csr-approver-29567036-mnkd8\" (UID: \"889da12d-843a-4c71-8d48-cbb0360b024a\") " pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.450558 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:00 crc kubenswrapper[4730]: I0320 15:56:00.870921 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567036-mnkd8"] Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.162216 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt"] Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.167622 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt"] Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.167718 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.169984 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bznvq" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.319375 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.319469 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.319593 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dcc\" (UniqueName: \"kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.420867 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle\") pod 
\"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.421145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.421300 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dcc\" (UniqueName: \"kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.421768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.421945 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " 
pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.447017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dcc\" (UniqueName: \"kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc\") pod \"6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.488582 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bznvq" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.497630 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.716160 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt"] Mar 20 15:56:01 crc kubenswrapper[4730]: W0320 15:56:01.722837 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2334c1_9644_4fd3_9ea3_984ebcd8dc5a.slice/crio-39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995 WatchSource:0}: Error finding container 39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995: Status 404 returned error can't find the container with id 39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995 Mar 20 15:56:01 crc kubenswrapper[4730]: I0320 15:56:01.732945 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" 
event={"ID":"889da12d-843a-4c71-8d48-cbb0360b024a","Type":"ContainerStarted","Data":"1b28fd03418b6f353e6627eaef64c3d1916825ee1c8a46df8c737da3eb3b606a"} Mar 20 15:56:02 crc kubenswrapper[4730]: I0320 15:56:02.739315 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerID="8cf0199ce2da0157ec10f0f6d25409b2da9972c181577bd215ad3f0cf65e50f1" exitCode=0 Mar 20 15:56:02 crc kubenswrapper[4730]: I0320 15:56:02.739410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" event={"ID":"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a","Type":"ContainerDied","Data":"8cf0199ce2da0157ec10f0f6d25409b2da9972c181577bd215ad3f0cf65e50f1"} Mar 20 15:56:02 crc kubenswrapper[4730]: I0320 15:56:02.739694 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" event={"ID":"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a","Type":"ContainerStarted","Data":"39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995"} Mar 20 15:56:02 crc kubenswrapper[4730]: I0320 15:56:02.743667 4730 generic.go:334] "Generic (PLEG): container finished" podID="889da12d-843a-4c71-8d48-cbb0360b024a" containerID="2ec54c009b326db4c49da642b8ab1232405aacb430ead248fe894a34dfe7c452" exitCode=0 Mar 20 15:56:02 crc kubenswrapper[4730]: I0320 15:56:02.743706 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" event={"ID":"889da12d-843a-4c71-8d48-cbb0360b024a","Type":"ContainerDied","Data":"2ec54c009b326db4c49da642b8ab1232405aacb430ead248fe894a34dfe7c452"} Mar 20 15:56:03 crc kubenswrapper[4730]: I0320 15:56:03.752016 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerID="94f44e77a5dcfdbd97bb6208822f02fb0b29dd74e63ad2a09b0cde739252bf64" exitCode=0 Mar 20 15:56:03 crc kubenswrapper[4730]: 
I0320 15:56:03.752068 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" event={"ID":"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a","Type":"ContainerDied","Data":"94f44e77a5dcfdbd97bb6208822f02fb0b29dd74e63ad2a09b0cde739252bf64"} Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.013087 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.158513 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw9d9\" (UniqueName: \"kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9\") pod \"889da12d-843a-4c71-8d48-cbb0360b024a\" (UID: \"889da12d-843a-4c71-8d48-cbb0360b024a\") " Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.163879 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9" (OuterVolumeSpecName: "kube-api-access-xw9d9") pod "889da12d-843a-4c71-8d48-cbb0360b024a" (UID: "889da12d-843a-4c71-8d48-cbb0360b024a"). InnerVolumeSpecName "kube-api-access-xw9d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.260107 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw9d9\" (UniqueName: \"kubernetes.io/projected/889da12d-843a-4c71-8d48-cbb0360b024a-kube-api-access-xw9d9\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.761380 4730 generic.go:334] "Generic (PLEG): container finished" podID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerID="30e4a30153a37b946ecc906e059164d0866e21ff8e26907ff74c60a8683c99e7" exitCode=0 Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.761478 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" event={"ID":"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a","Type":"ContainerDied","Data":"30e4a30153a37b946ecc906e059164d0866e21ff8e26907ff74c60a8683c99e7"} Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.764705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" event={"ID":"889da12d-843a-4c71-8d48-cbb0360b024a","Type":"ContainerDied","Data":"1b28fd03418b6f353e6627eaef64c3d1916825ee1c8a46df8c737da3eb3b606a"} Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.764740 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567036-mnkd8" Mar 20 15:56:04 crc kubenswrapper[4730]: I0320 15:56:04.764752 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b28fd03418b6f353e6627eaef64c3d1916825ee1c8a46df8c737da3eb3b606a" Mar 20 15:56:05 crc kubenswrapper[4730]: I0320 15:56:05.074292 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567030-pwdln"] Mar 20 15:56:05 crc kubenswrapper[4730]: I0320 15:56:05.074963 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567030-pwdln"] Mar 20 15:56:05 crc kubenswrapper[4730]: I0320 15:56:05.539537 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7dcd73b-be94-4b96-b001-593d2fd56aa3" path="/var/lib/kubelet/pods/b7dcd73b-be94-4b96-b001-593d2fd56aa3/volumes" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.008347 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.083040 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27dcc\" (UniqueName: \"kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc\") pod \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.083203 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util\") pod \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.083240 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle\") pod \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\" (UID: \"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a\") " Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.084010 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle" (OuterVolumeSpecName: "bundle") pod "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" (UID: "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.093556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc" (OuterVolumeSpecName: "kube-api-access-27dcc") pod "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" (UID: "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a"). InnerVolumeSpecName "kube-api-access-27dcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.096674 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util" (OuterVolumeSpecName: "util") pod "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" (UID: "8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.184838 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27dcc\" (UniqueName: \"kubernetes.io/projected/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-kube-api-access-27dcc\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.184869 4730 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-util\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.184880 4730 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.779230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" event={"ID":"8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a","Type":"ContainerDied","Data":"39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995"} Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.779315 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f50a0bde24a3fc4c9b9f752f5c3f4ddc1714b08bce4063cca2dec5f9e70995" Mar 20 15:56:06 crc kubenswrapper[4730]: I0320 15:56:06.779362 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829003 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9"] Mar 20 15:56:13 crc kubenswrapper[4730]: E0320 15:56:13.829583 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="extract" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829596 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="extract" Mar 20 15:56:13 crc kubenswrapper[4730]: E0320 15:56:13.829614 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="util" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829620 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="util" Mar 20 15:56:13 crc kubenswrapper[4730]: E0320 15:56:13.829633 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889da12d-843a-4c71-8d48-cbb0360b024a" containerName="oc" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829639 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="889da12d-843a-4c71-8d48-cbb0360b024a" containerName="oc" Mar 20 15:56:13 crc kubenswrapper[4730]: E0320 15:56:13.829651 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="pull" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829656 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="pull" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829755 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="889da12d-843a-4c71-8d48-cbb0360b024a" 
containerName="oc" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.829767 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a" containerName="extract" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.830138 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.833232 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-pnth6" Mar 20 15:56:13 crc kubenswrapper[4730]: I0320 15:56:13.855870 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9"] Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.026663 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9rh\" (UniqueName: \"kubernetes.io/projected/d85bb2c7-8dba-4091-a6cf-12cf58bf64a9-kube-api-access-5s9rh\") pod \"openstack-operator-controller-init-646f48576b-5p6h9\" (UID: \"d85bb2c7-8dba-4091-a6cf-12cf58bf64a9\") " pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.127662 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s9rh\" (UniqueName: \"kubernetes.io/projected/d85bb2c7-8dba-4091-a6cf-12cf58bf64a9-kube-api-access-5s9rh\") pod \"openstack-operator-controller-init-646f48576b-5p6h9\" (UID: \"d85bb2c7-8dba-4091-a6cf-12cf58bf64a9\") " pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.154483 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s9rh\" (UniqueName: 
\"kubernetes.io/projected/d85bb2c7-8dba-4091-a6cf-12cf58bf64a9-kube-api-access-5s9rh\") pod \"openstack-operator-controller-init-646f48576b-5p6h9\" (UID: \"d85bb2c7-8dba-4091-a6cf-12cf58bf64a9\") " pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.220932 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.660609 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9"] Mar 20 15:56:14 crc kubenswrapper[4730]: W0320 15:56:14.677636 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd85bb2c7_8dba_4091_a6cf_12cf58bf64a9.slice/crio-6d30eeb068a680ff0fac19656d50ead28e6e692ed91f698dd2150db25654db6b WatchSource:0}: Error finding container 6d30eeb068a680ff0fac19656d50ead28e6e692ed91f698dd2150db25654db6b: Status 404 returned error can't find the container with id 6d30eeb068a680ff0fac19656d50ead28e6e692ed91f698dd2150db25654db6b Mar 20 15:56:14 crc kubenswrapper[4730]: I0320 15:56:14.827360 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" event={"ID":"d85bb2c7-8dba-4091-a6cf-12cf58bf64a9","Type":"ContainerStarted","Data":"6d30eeb068a680ff0fac19656d50ead28e6e692ed91f698dd2150db25654db6b"} Mar 20 15:56:18 crc kubenswrapper[4730]: I0320 15:56:18.854209 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" event={"ID":"d85bb2c7-8dba-4091-a6cf-12cf58bf64a9","Type":"ContainerStarted","Data":"ef52146127814dd76e89f2192a0ab5609649d658f1ece4c6ac4cec32b6563640"} Mar 20 15:56:18 crc kubenswrapper[4730]: I0320 15:56:18.854774 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:18 crc kubenswrapper[4730]: I0320 15:56:18.907197 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" podStartSLOduration=2.347632943 podStartE2EDuration="5.907176492s" podCreationTimestamp="2026-03-20 15:56:13 +0000 UTC" firstStartedPulling="2026-03-20 15:56:14.680970405 +0000 UTC m=+1033.894341774" lastFinishedPulling="2026-03-20 15:56:18.240513944 +0000 UTC m=+1037.453885323" observedRunningTime="2026-03-20 15:56:18.90184332 +0000 UTC m=+1038.115214689" watchObservedRunningTime="2026-03-20 15:56:18.907176492 +0000 UTC m=+1038.120547861" Mar 20 15:56:24 crc kubenswrapper[4730]: I0320 15:56:24.224784 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-646f48576b-5p6h9" Mar 20 15:56:28 crc kubenswrapper[4730]: I0320 15:56:28.498810 4730 scope.go:117] "RemoveContainer" containerID="db53fcef559ab1b37329ca537473be13177cc4e3055c12b3c5b8536921ff4616" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.223659 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.225749 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.236570 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.314668 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.314712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpff\" (UniqueName: \"kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.314732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.415792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.415845 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qfpff\" (UniqueName: \"kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.415866 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.416352 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.416381 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.439806 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpff\" (UniqueName: \"kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff\") pod \"certified-operators-kgdl9\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.543936 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:40 crc kubenswrapper[4730]: I0320 15:56:40.797217 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:41 crc kubenswrapper[4730]: I0320 15:56:41.376934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerStarted","Data":"eb5c3ed708196e8c37c652b11aa1ad313db538615cda523b4337b38046c73b0f"} Mar 20 15:56:42 crc kubenswrapper[4730]: I0320 15:56:42.383814 4730 generic.go:334] "Generic (PLEG): container finished" podID="2133f9df-adc5-426d-87eb-b229d518b130" containerID="48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca" exitCode=0 Mar 20 15:56:42 crc kubenswrapper[4730]: I0320 15:56:42.383861 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerDied","Data":"48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca"} Mar 20 15:56:43 crc kubenswrapper[4730]: I0320 15:56:43.394339 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerStarted","Data":"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d"} Mar 20 15:56:44 crc kubenswrapper[4730]: I0320 15:56:44.415333 4730 generic.go:334] "Generic (PLEG): container finished" podID="2133f9df-adc5-426d-87eb-b229d518b130" containerID="f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d" exitCode=0 Mar 20 15:56:44 crc kubenswrapper[4730]: I0320 15:56:44.415562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" 
event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerDied","Data":"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d"} Mar 20 15:56:45 crc kubenswrapper[4730]: I0320 15:56:45.425290 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerStarted","Data":"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96"} Mar 20 15:56:45 crc kubenswrapper[4730]: I0320 15:56:45.446086 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kgdl9" podStartSLOduration=2.960585725 podStartE2EDuration="5.446066691s" podCreationTimestamp="2026-03-20 15:56:40 +0000 UTC" firstStartedPulling="2026-03-20 15:56:42.385415966 +0000 UTC m=+1061.598787335" lastFinishedPulling="2026-03-20 15:56:44.870896932 +0000 UTC m=+1064.084268301" observedRunningTime="2026-03-20 15:56:45.445594898 +0000 UTC m=+1064.658966267" watchObservedRunningTime="2026-03-20 15:56:45.446066691 +0000 UTC m=+1064.659438060" Mar 20 15:56:50 crc kubenswrapper[4730]: I0320 15:56:50.544877 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:50 crc kubenswrapper[4730]: I0320 15:56:50.545324 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:50 crc kubenswrapper[4730]: I0320 15:56:50.646698 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:51 crc kubenswrapper[4730]: I0320 15:56:51.513182 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:51 crc kubenswrapper[4730]: I0320 15:56:51.566219 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:53 crc kubenswrapper[4730]: I0320 15:56:53.482843 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kgdl9" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="registry-server" containerID="cri-o://e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96" gracePeriod=2 Mar 20 15:56:53 crc kubenswrapper[4730]: E0320 15:56:53.610368 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2133f9df_adc5_426d_87eb_b229d518b130.slice/crio-e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2133f9df_adc5_426d_87eb_b229d518b130.slice/crio-conmon-e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96.scope\": RecentStats: unable to find data in memory cache]" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.015515 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.116068 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content\") pod \"2133f9df-adc5-426d-87eb-b229d518b130\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.116177 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities\") pod \"2133f9df-adc5-426d-87eb-b229d518b130\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.116200 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpff\" (UniqueName: \"kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff\") pod \"2133f9df-adc5-426d-87eb-b229d518b130\" (UID: \"2133f9df-adc5-426d-87eb-b229d518b130\") " Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.117152 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities" (OuterVolumeSpecName: "utilities") pod "2133f9df-adc5-426d-87eb-b229d518b130" (UID: "2133f9df-adc5-426d-87eb-b229d518b130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.125752 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff" (OuterVolumeSpecName: "kube-api-access-qfpff") pod "2133f9df-adc5-426d-87eb-b229d518b130" (UID: "2133f9df-adc5-426d-87eb-b229d518b130"). InnerVolumeSpecName "kube-api-access-qfpff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.217352 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.217384 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpff\" (UniqueName: \"kubernetes.io/projected/2133f9df-adc5-426d-87eb-b229d518b130-kube-api-access-qfpff\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.489972 4730 generic.go:334] "Generic (PLEG): container finished" podID="2133f9df-adc5-426d-87eb-b229d518b130" containerID="e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96" exitCode=0 Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.490014 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerDied","Data":"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96"} Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.490065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgdl9" event={"ID":"2133f9df-adc5-426d-87eb-b229d518b130","Type":"ContainerDied","Data":"eb5c3ed708196e8c37c652b11aa1ad313db538615cda523b4337b38046c73b0f"} Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.490064 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgdl9" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.490087 4730 scope.go:117] "RemoveContainer" containerID="e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.508102 4730 scope.go:117] "RemoveContainer" containerID="f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.523679 4730 scope.go:117] "RemoveContainer" containerID="48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.545448 4730 scope.go:117] "RemoveContainer" containerID="e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96" Mar 20 15:56:54 crc kubenswrapper[4730]: E0320 15:56:54.546849 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96\": container with ID starting with e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96 not found: ID does not exist" containerID="e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.546909 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96"} err="failed to get container status \"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96\": rpc error: code = NotFound desc = could not find container \"e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96\": container with ID starting with e30d27703acd54f286baa7767ed6b870660616b2cea2e85eb4333f75a1d87f96 not found: ID does not exist" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.546944 4730 scope.go:117] "RemoveContainer" 
containerID="f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d" Mar 20 15:56:54 crc kubenswrapper[4730]: E0320 15:56:54.547411 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d\": container with ID starting with f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d not found: ID does not exist" containerID="f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.547447 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d"} err="failed to get container status \"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d\": rpc error: code = NotFound desc = could not find container \"f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d\": container with ID starting with f47a9232cf9f18f060d1f87bcd94f5bf265453d693e22fe8ffad53c140c9a15d not found: ID does not exist" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.547472 4730 scope.go:117] "RemoveContainer" containerID="48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca" Mar 20 15:56:54 crc kubenswrapper[4730]: E0320 15:56:54.547831 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca\": container with ID starting with 48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca not found: ID does not exist" containerID="48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.547855 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca"} err="failed to get container status \"48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca\": rpc error: code = NotFound desc = could not find container \"48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca\": container with ID starting with 48b55984ef597f25366eb00045958af43e53d6257da11c9de40c0685dc1b9bca not found: ID does not exist" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.924068 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2133f9df-adc5-426d-87eb-b229d518b130" (UID: "2133f9df-adc5-426d-87eb-b229d518b130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:56:54 crc kubenswrapper[4730]: I0320 15:56:54.929930 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2133f9df-adc5-426d-87eb-b229d518b130-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 15:56:55 crc kubenswrapper[4730]: I0320 15:56:55.127346 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:55 crc kubenswrapper[4730]: I0320 15:56:55.133391 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kgdl9"] Mar 20 15:56:55 crc kubenswrapper[4730]: I0320 15:56:55.539869 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2133f9df-adc5-426d-87eb-b229d518b130" path="/var/lib/kubelet/pods/2133f9df-adc5-426d-87eb-b229d518b130/volumes" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.907907 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd"] Mar 20 15:56:59 crc 
kubenswrapper[4730]: E0320 15:56:59.908190 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="extract-utilities" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.908206 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="extract-utilities" Mar 20 15:56:59 crc kubenswrapper[4730]: E0320 15:56:59.908226 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="extract-content" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.908234 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="extract-content" Mar 20 15:56:59 crc kubenswrapper[4730]: E0320 15:56:59.908264 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="registry-server" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.908273 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="registry-server" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.908430 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2133f9df-adc5-426d-87eb-b229d518b130" containerName="registry-server" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.908936 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.910995 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jtrs9" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.913370 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.914278 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.920813 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-tph25" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.921424 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.925998 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.936355 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.937182 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.943827 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.946023 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.946617 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lf45j" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.947776 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-22gcs" Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.962625 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc"] Mar 20 15:56:59 crc kubenswrapper[4730]: I0320 15:56:59.972026 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.000322 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.001185 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.006863 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xddkf" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.016298 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.046948 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.047786 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.047872 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.050477 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-k8gx9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.066583 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.069002 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.078417 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.078684 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hg8js" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.086197 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.087154 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.091853 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rddvd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.092681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cgmv\" (UniqueName: \"kubernetes.io/projected/d658514c-f369-4ce2-ad50-d055fd208694-kube-api-access-7cgmv\") pod \"glance-operator-controller-manager-79df6bcc97-llp6b\" (UID: \"d658514c-f369-4ce2-ad50-d055fd208694\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.092745 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw6d\" (UniqueName: \"kubernetes.io/projected/e8ad6f56-863f-473b-a4d4-d4f70d9489a4-kube-api-access-nfw6d\") pod \"designate-operator-controller-manager-588d4d986b-nwwzc\" (UID: \"e8ad6f56-863f-473b-a4d4-d4f70d9489a4\") " 
pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.092805 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fndw\" (UniqueName: \"kubernetes.io/projected/4fb51ed6-04e3-40db-ab21-eb0fe66442fe-kube-api-access-5fndw\") pod \"barbican-operator-controller-manager-59bc569d95-dmd8z\" (UID: \"4fb51ed6-04e3-40db-ab21-eb0fe66442fe\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.092847 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r4ml\" (UniqueName: \"kubernetes.io/projected/c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef-kube-api-access-6r4ml\") pod \"cinder-operator-controller-manager-8d58dc466-wqqnd\" (UID: \"c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.092884 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dbz\" (UniqueName: \"kubernetes.io/projected/acffaecc-dd6c-4819-91cf-99c5d0154143-kube-api-access-74dbz\") pod \"heat-operator-controller-manager-67dd5f86f5-v96m5\" (UID: \"acffaecc-dd6c-4819-91cf-99c5d0154143\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.103827 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.120446 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.131059 4730 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.131879 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.134778 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2cwsv" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.148370 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.160154 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.161514 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.166826 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pvtt9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.169346 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.172931 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.174746 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-647b7" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.191086 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.220195 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.220825 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfh6\" (UniqueName: \"kubernetes.io/projected/24280954-941c-445f-aa52-e360ce544046-kube-api-access-krfh6\") pod \"ironic-operator-controller-manager-6f787dddc9-9k6lh\" (UID: \"24280954-941c-445f-aa52-e360ce544046\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.220908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fndw\" (UniqueName: \"kubernetes.io/projected/4fb51ed6-04e3-40db-ab21-eb0fe66442fe-kube-api-access-5fndw\") pod \"barbican-operator-controller-manager-59bc569d95-dmd8z\" (UID: \"4fb51ed6-04e3-40db-ab21-eb0fe66442fe\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.221067 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48t4\" (UniqueName: \"kubernetes.io/projected/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-kube-api-access-c48t4\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: 
\"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.221221 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r4ml\" (UniqueName: \"kubernetes.io/projected/c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef-kube-api-access-6r4ml\") pod \"cinder-operator-controller-manager-8d58dc466-wqqnd\" (UID: \"c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.221908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dbz\" (UniqueName: \"kubernetes.io/projected/acffaecc-dd6c-4819-91cf-99c5d0154143-kube-api-access-74dbz\") pod \"heat-operator-controller-manager-67dd5f86f5-v96m5\" (UID: \"acffaecc-dd6c-4819-91cf-99c5d0154143\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.222126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cgmv\" (UniqueName: \"kubernetes.io/projected/d658514c-f369-4ce2-ad50-d055fd208694-kube-api-access-7cgmv\") pod \"glance-operator-controller-manager-79df6bcc97-llp6b\" (UID: \"d658514c-f369-4ce2-ad50-d055fd208694\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.222370 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdf7\" (UniqueName: \"kubernetes.io/projected/f733406e-5258-4cfe-870d-4fb86152363e-kube-api-access-2rdf7\") pod \"horizon-operator-controller-manager-8464cc45fb-pf8sw\" (UID: \"f733406e-5258-4cfe-870d-4fb86152363e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:00 crc 
kubenswrapper[4730]: I0320 15:57:00.222578 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw6d\" (UniqueName: \"kubernetes.io/projected/e8ad6f56-863f-473b-a4d4-d4f70d9489a4-kube-api-access-nfw6d\") pod \"designate-operator-controller-manager-588d4d986b-nwwzc\" (UID: \"e8ad6f56-863f-473b-a4d4-d4f70d9489a4\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.222737 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.243649 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.257140 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7cctn" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.263231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fndw\" (UniqueName: \"kubernetes.io/projected/4fb51ed6-04e3-40db-ab21-eb0fe66442fe-kube-api-access-5fndw\") pod \"barbican-operator-controller-manager-59bc569d95-dmd8z\" (UID: \"4fb51ed6-04e3-40db-ab21-eb0fe66442fe\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.264162 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.266936 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dbz\" (UniqueName: \"kubernetes.io/projected/acffaecc-dd6c-4819-91cf-99c5d0154143-kube-api-access-74dbz\") pod \"heat-operator-controller-manager-67dd5f86f5-v96m5\" (UID: \"acffaecc-dd6c-4819-91cf-99c5d0154143\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.272854 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r4ml\" (UniqueName: \"kubernetes.io/projected/c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef-kube-api-access-6r4ml\") pod \"cinder-operator-controller-manager-8d58dc466-wqqnd\" (UID: \"c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.287139 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.296677 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw6d\" (UniqueName: \"kubernetes.io/projected/e8ad6f56-863f-473b-a4d4-d4f70d9489a4-kube-api-access-nfw6d\") pod \"designate-operator-controller-manager-588d4d986b-nwwzc\" (UID: \"e8ad6f56-863f-473b-a4d4-d4f70d9489a4\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.299126 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.301758 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7cgmv\" (UniqueName: \"kubernetes.io/projected/d658514c-f369-4ce2-ad50-d055fd208694-kube-api-access-7cgmv\") pod \"glance-operator-controller-manager-79df6bcc97-llp6b\" (UID: \"d658514c-f369-4ce2-ad50-d055fd208694\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.322752 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.324077 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqkt\" (UniqueName: \"kubernetes.io/projected/cf3ded14-d81b-4384-93e4-e51cde6a31ec-kube-api-access-vsqkt\") pod \"keystone-operator-controller-manager-768b96df4c-g4kgd\" (UID: \"cf3ded14-d81b-4384-93e4-e51cde6a31ec\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krfh6\" (UniqueName: \"kubernetes.io/projected/24280954-941c-445f-aa52-e360ce544046-kube-api-access-krfh6\") pod \"ironic-operator-controller-manager-6f787dddc9-9k6lh\" (UID: \"24280954-941c-445f-aa52-e360ce544046\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325711 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kv24\" (UniqueName: \"kubernetes.io/projected/87b37583-ab1d-4f9e-98e9-8cb9bdcc5165-kube-api-access-8kv24\") pod 
\"manila-operator-controller-manager-55f864c847-bqjxs\" (UID: \"87b37583-ab1d-4f9e-98e9-8cb9bdcc5165\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325733 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvkw4\" (UniqueName: \"kubernetes.io/projected/19a5ba3c-9f89-43f6-bd55-6998df2e3533-kube-api-access-tvkw4\") pod \"mariadb-operator-controller-manager-67ccfc9778-rnx2d\" (UID: \"19a5ba3c-9f89-43f6-bd55-6998df2e3533\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325769 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c48t4\" (UniqueName: \"kubernetes.io/projected/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-kube-api-access-c48t4\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325834 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdf7\" (UniqueName: \"kubernetes.io/projected/f733406e-5258-4cfe-870d-4fb86152363e-kube-api-access-2rdf7\") pod \"horizon-operator-controller-manager-8464cc45fb-pf8sw\" (UID: \"f733406e-5258-4cfe-870d-4fb86152363e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.325868 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " 
pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.326975 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.327049 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert podName:d8b68e41-b53d-4fb3-8a86-0c604cda0e46 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:00.82702877 +0000 UTC m=+1080.040400139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert") pod "infra-operator-controller-manager-7b9c774f96-4pkr9" (UID: "d8b68e41-b53d-4fb3-8a86-0c604cda0e46") : secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.328858 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.336065 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.336938 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.343133 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.343881 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lk442" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.345993 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7d5bf" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.347706 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krfh6\" (UniqueName: \"kubernetes.io/projected/24280954-941c-445f-aa52-e360ce544046-kube-api-access-krfh6\") pod \"ironic-operator-controller-manager-6f787dddc9-9k6lh\" (UID: \"24280954-941c-445f-aa52-e360ce544046\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.347767 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.366893 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c48t4\" (UniqueName: \"kubernetes.io/projected/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-kube-api-access-c48t4\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.374214 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdf7\" (UniqueName: 
\"kubernetes.io/projected/f733406e-5258-4cfe-870d-4fb86152363e-kube-api-access-2rdf7\") pod \"horizon-operator-controller-manager-8464cc45fb-pf8sw\" (UID: \"f733406e-5258-4cfe-870d-4fb86152363e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.378981 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.383203 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.384424 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.388140 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.391043 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-57nkr" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.400909 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.402059 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.405956 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g8rlc" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.416753 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-lt49w"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.417876 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.420420 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qsg4k" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.424531 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.433210 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.428135 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzrlg\" (UniqueName: \"kubernetes.io/projected/61755ffd-de91-4a38-a174-fe1a4c57dfd0-kube-api-access-zzrlg\") pod \"neutron-operator-controller-manager-767865f676-xw6kk\" (UID: \"61755ffd-de91-4a38-a174-fe1a4c57dfd0\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.434911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqkt\" (UniqueName: \"kubernetes.io/projected/cf3ded14-d81b-4384-93e4-e51cde6a31ec-kube-api-access-vsqkt\") pod \"keystone-operator-controller-manager-768b96df4c-g4kgd\" (UID: \"cf3ded14-d81b-4384-93e4-e51cde6a31ec\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.434978 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kv24\" (UniqueName: \"kubernetes.io/projected/87b37583-ab1d-4f9e-98e9-8cb9bdcc5165-kube-api-access-8kv24\") pod \"manila-operator-controller-manager-55f864c847-bqjxs\" (UID: \"87b37583-ab1d-4f9e-98e9-8cb9bdcc5165\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.434996 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gcf\" (UniqueName: \"kubernetes.io/projected/36dd23cb-43b2-4c25-9e24-3e2f69f93eff-kube-api-access-l2gcf\") pod 
\"nova-operator-controller-manager-5d488d59fb-l7v9q\" (UID: \"36dd23cb-43b2-4c25-9e24-3e2f69f93eff\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.435017 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvkw4\" (UniqueName: \"kubernetes.io/projected/19a5ba3c-9f89-43f6-bd55-6998df2e3533-kube-api-access-tvkw4\") pod \"mariadb-operator-controller-manager-67ccfc9778-rnx2d\" (UID: \"19a5ba3c-9f89-43f6-bd55-6998df2e3533\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.435131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwmj\" (UniqueName: \"kubernetes.io/projected/d7ad408f-56db-4b5b-bea9-ba821eae2b80-kube-api-access-kkwmj\") pod \"octavia-operator-controller-manager-5b9f45d989-w8x5z\" (UID: \"d7ad408f-56db-4b5b-bea9-ba821eae2b80\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.455219 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-lt49w"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.460569 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kv24\" (UniqueName: \"kubernetes.io/projected/87b37583-ab1d-4f9e-98e9-8cb9bdcc5165-kube-api-access-8kv24\") pod \"manila-operator-controller-manager-55f864c847-bqjxs\" (UID: \"87b37583-ab1d-4f9e-98e9-8cb9bdcc5165\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.460640 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8"] Mar 20 15:57:00 crc 
kubenswrapper[4730]: I0320 15:57:00.461601 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.463004 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4znx8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.464987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvkw4\" (UniqueName: \"kubernetes.io/projected/19a5ba3c-9f89-43f6-bd55-6998df2e3533-kube-api-access-tvkw4\") pod \"mariadb-operator-controller-manager-67ccfc9778-rnx2d\" (UID: \"19a5ba3c-9f89-43f6-bd55-6998df2e3533\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.468027 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqkt\" (UniqueName: \"kubernetes.io/projected/cf3ded14-d81b-4384-93e4-e51cde6a31ec-kube-api-access-vsqkt\") pod \"keystone-operator-controller-manager-768b96df4c-g4kgd\" (UID: \"cf3ded14-d81b-4384-93e4-e51cde6a31ec\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.475063 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.488466 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.510963 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.522153 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.527358 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.528204 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.530583 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gbr7k" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkwmj\" (UniqueName: \"kubernetes.io/projected/d7ad408f-56db-4b5b-bea9-ba821eae2b80-kube-api-access-kkwmj\") pod \"octavia-operator-controller-manager-5b9f45d989-w8x5z\" (UID: \"d7ad408f-56db-4b5b-bea9-ba821eae2b80\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536191 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdpg\" (UniqueName: \"kubernetes.io/projected/6944c865-92a4-441c-907b-27424898cb99-kube-api-access-rpdpg\") pod \"ovn-operator-controller-manager-884679f54-t7kkm\" (UID: \"6944c865-92a4-441c-907b-27424898cb99\") " 
pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzrlg\" (UniqueName: \"kubernetes.io/projected/61755ffd-de91-4a38-a174-fe1a4c57dfd0-kube-api-access-zzrlg\") pod \"neutron-operator-controller-manager-767865f676-xw6kk\" (UID: \"61755ffd-de91-4a38-a174-fe1a4c57dfd0\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536303 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm4mm\" (UniqueName: \"kubernetes.io/projected/9944d85d-4f1c-4312-ac57-49ee75a8fd16-kube-api-access-xm4mm\") pod \"placement-operator-controller-manager-5784578c99-lt49w\" (UID: \"9944d85d-4f1c-4312-ac57-49ee75a8fd16\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gcf\" (UniqueName: \"kubernetes.io/projected/36dd23cb-43b2-4c25-9e24-3e2f69f93eff-kube-api-access-l2gcf\") pod \"nova-operator-controller-manager-5d488d59fb-l7v9q\" (UID: \"36dd23cb-43b2-4c25-9e24-3e2f69f93eff\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 
15:57:00.536374 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrrk\" (UniqueName: \"kubernetes.io/projected/8f74be61-d309-417c-90a3-2962b57071c4-kube-api-access-7wrrk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.536917 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.542920 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.569199 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.572579 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.573223 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.578700 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2nljx" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.578839 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gcf\" (UniqueName: \"kubernetes.io/projected/36dd23cb-43b2-4c25-9e24-3e2f69f93eff-kube-api-access-l2gcf\") pod \"nova-operator-controller-manager-5d488d59fb-l7v9q\" (UID: \"36dd23cb-43b2-4c25-9e24-3e2f69f93eff\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.579816 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.580561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzrlg\" (UniqueName: \"kubernetes.io/projected/61755ffd-de91-4a38-a174-fe1a4c57dfd0-kube-api-access-zzrlg\") pod \"neutron-operator-controller-manager-767865f676-xw6kk\" (UID: \"61755ffd-de91-4a38-a174-fe1a4c57dfd0\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.583532 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.593952 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkwmj\" (UniqueName: \"kubernetes.io/projected/d7ad408f-56db-4b5b-bea9-ba821eae2b80-kube-api-access-kkwmj\") pod \"octavia-operator-controller-manager-5b9f45d989-w8x5z\" (UID: \"d7ad408f-56db-4b5b-bea9-ba821eae2b80\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.637890 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrrk\" (UniqueName: \"kubernetes.io/projected/8f74be61-d309-417c-90a3-2962b57071c4-kube-api-access-7wrrk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.637979 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdpg\" (UniqueName: \"kubernetes.io/projected/6944c865-92a4-441c-907b-27424898cb99-kube-api-access-rpdpg\") pod \"ovn-operator-controller-manager-884679f54-t7kkm\" (UID: \"6944c865-92a4-441c-907b-27424898cb99\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.638044 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.638147 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm4mm\" (UniqueName: \"kubernetes.io/projected/9944d85d-4f1c-4312-ac57-49ee75a8fd16-kube-api-access-xm4mm\") pod \"placement-operator-controller-manager-5784578c99-lt49w\" (UID: \"9944d85d-4f1c-4312-ac57-49ee75a8fd16\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.638181 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrvs\" (UniqueName: \"kubernetes.io/projected/db4a9305-eefd-4804-ac7a-4d811bd928f5-kube-api-access-5jrvs\") pod \"swift-operator-controller-manager-c674c5965-6f2w8\" (UID: \"db4a9305-eefd-4804-ac7a-4d811bd928f5\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.638239 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5h4l\" (UniqueName: \"kubernetes.io/projected/cdbd62c8-9960-4257-87d9-d4923c7ef8dd-kube-api-access-d5h4l\") pod \"telemetry-operator-controller-manager-d6b694c5-lrpjm\" (UID: \"cdbd62c8-9960-4257-87d9-d4923c7ef8dd\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.638827 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.638881 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:01.138864376 +0000 UTC m=+1080.352235745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.676074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrrk\" (UniqueName: \"kubernetes.io/projected/8f74be61-d309-417c-90a3-2962b57071c4-kube-api-access-7wrrk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.679051 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdpg\" (UniqueName: \"kubernetes.io/projected/6944c865-92a4-441c-907b-27424898cb99-kube-api-access-rpdpg\") pod \"ovn-operator-controller-manager-884679f54-t7kkm\" (UID: \"6944c865-92a4-441c-907b-27424898cb99\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.691163 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm4mm\" (UniqueName: \"kubernetes.io/projected/9944d85d-4f1c-4312-ac57-49ee75a8fd16-kube-api-access-xm4mm\") pod \"placement-operator-controller-manager-5784578c99-lt49w\" (UID: \"9944d85d-4f1c-4312-ac57-49ee75a8fd16\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.722053 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.723379 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.730827 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.743869 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrvs\" (UniqueName: \"kubernetes.io/projected/db4a9305-eefd-4804-ac7a-4d811bd928f5-kube-api-access-5jrvs\") pod \"swift-operator-controller-manager-c674c5965-6f2w8\" (UID: \"db4a9305-eefd-4804-ac7a-4d811bd928f5\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.743913 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5h4l\" (UniqueName: \"kubernetes.io/projected/cdbd62c8-9960-4257-87d9-d4923c7ef8dd-kube-api-access-d5h4l\") pod \"telemetry-operator-controller-manager-d6b694c5-lrpjm\" (UID: \"cdbd62c8-9960-4257-87d9-d4923c7ef8dd\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.743947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvtm\" (UniqueName: \"kubernetes.io/projected/92c29eff-b9ab-4420-86c6-6b388cfc87af-kube-api-access-tkvtm\") pod \"test-operator-controller-manager-5c5cb9c4d7-bm7hr\" (UID: \"92c29eff-b9ab-4420-86c6-6b388cfc87af\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.754381 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.754641 4730 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.757352 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-n27l7" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.785991 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.822815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrvs\" (UniqueName: \"kubernetes.io/projected/db4a9305-eefd-4804-ac7a-4d811bd928f5-kube-api-access-5jrvs\") pod \"swift-operator-controller-manager-c674c5965-6f2w8\" (UID: \"db4a9305-eefd-4804-ac7a-4d811bd928f5\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.846166 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.847774 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.858485 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6bk\" (UniqueName: \"kubernetes.io/projected/f00b4813-358d-49c4-bf9d-486e35f5a94f-kube-api-access-7w6bk\") pod \"watcher-operator-controller-manager-6c5858c67b-cfmtk\" (UID: \"f00b4813-358d-49c4-bf9d-486e35f5a94f\") " pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.858593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.858660 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvtm\" (UniqueName: \"kubernetes.io/projected/92c29eff-b9ab-4420-86c6-6b388cfc87af-kube-api-access-tkvtm\") pod \"test-operator-controller-manager-5c5cb9c4d7-bm7hr\" (UID: \"92c29eff-b9ab-4420-86c6-6b388cfc87af\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.859659 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: E0320 15:57:00.859730 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert podName:d8b68e41-b53d-4fb3-8a86-0c604cda0e46 nodeName:}" failed. 
No retries permitted until 2026-03-20 15:57:01.859711007 +0000 UTC m=+1081.073082376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert") pod "infra-operator-controller-manager-7b9c774f96-4pkr9" (UID: "d8b68e41-b53d-4fb3-8a86-0c604cda0e46") : secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.862919 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.869870 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.870328 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nctfr" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.878851 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.870361 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.859672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5h4l\" (UniqueName: \"kubernetes.io/projected/cdbd62c8-9960-4257-87d9-d4923c7ef8dd-kube-api-access-d5h4l\") pod \"telemetry-operator-controller-manager-d6b694c5-lrpjm\" (UID: \"cdbd62c8-9960-4257-87d9-d4923c7ef8dd\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.909921 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvtm\" (UniqueName: \"kubernetes.io/projected/92c29eff-b9ab-4420-86c6-6b388cfc87af-kube-api-access-tkvtm\") pod \"test-operator-controller-manager-5c5cb9c4d7-bm7hr\" (UID: \"92c29eff-b9ab-4420-86c6-6b388cfc87af\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.910079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.944554 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.974390 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.975427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.975463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.975511 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6bk\" (UniqueName: \"kubernetes.io/projected/f00b4813-358d-49c4-bf9d-486e35f5a94f-kube-api-access-7w6bk\") pod \"watcher-operator-controller-manager-6c5858c67b-cfmtk\" (UID: \"f00b4813-358d-49c4-bf9d-486e35f5a94f\") " pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.975564 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99p9c\" (UniqueName: \"kubernetes.io/projected/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-kube-api-access-99p9c\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:00 crc 
kubenswrapper[4730]: I0320 15:57:00.977513 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.978419 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.983724 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5"] Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.991618 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:00 crc kubenswrapper[4730]: I0320 15:57:00.994539 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7j9w6" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:00.997371 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z"] Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.003748 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6bk\" (UniqueName: \"kubernetes.io/projected/f00b4813-358d-49c4-bf9d-486e35f5a94f-kube-api-access-7w6bk\") pod \"watcher-operator-controller-manager-6c5858c67b-cfmtk\" (UID: \"f00b4813-358d-49c4-bf9d-486e35f5a94f\") " pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.022462 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.061796 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.079021 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.079077 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.079161 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/82cae974-2029-42c3-81bf-e9bee167e991-kube-api-access-hz65q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdzv5\" (UID: \"82cae974-2029-42c3-81bf-e9bee167e991\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.079228 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99p9c\" (UniqueName: \"kubernetes.io/projected/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-kube-api-access-99p9c\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.079391 
4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.079455 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:01.579438515 +0000 UTC m=+1080.792809884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.079174 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.079691 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:01.579670662 +0000 UTC m=+1080.793042031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.120500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99p9c\" (UniqueName: \"kubernetes.io/projected/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-kube-api-access-99p9c\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.180862 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.180916 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/82cae974-2029-42c3-81bf-e9bee167e991-kube-api-access-hz65q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdzv5\" (UID: \"82cae974-2029-42c3-81bf-e9bee167e991\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.181358 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.181434 4730 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:02.181416495 +0000 UTC m=+1081.394787864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.204807 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz65q\" (UniqueName: \"kubernetes.io/projected/82cae974-2029-42c3-81bf-e9bee167e991-kube-api-access-hz65q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mdzv5\" (UID: \"82cae974-2029-42c3-81bf-e9bee167e991\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.239726 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5"] Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.349958 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw"] Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.377808 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.584142 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" event={"ID":"acffaecc-dd6c-4819-91cf-99c5d0154143","Type":"ContainerStarted","Data":"13d001283395a603cd311d6bbb865c68956f3919996ffd420b77272abed644bc"} Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.584504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" event={"ID":"f733406e-5258-4cfe-870d-4fb86152363e","Type":"ContainerStarted","Data":"c06c8dafbc5f1431a6c3449ac140f7b0107a38e11283f93df1b4982555086cf7"} Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.585635 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" event={"ID":"4fb51ed6-04e3-40db-ab21-eb0fe66442fe","Type":"ContainerStarted","Data":"d1a9c5ffc6adbb662f6b3059cb05c6bc3d05624788e0fc7aed8e7aca54dff196"} Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.590695 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.590738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " 
pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.590932 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.591000 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:02.590961377 +0000 UTC m=+1081.804332746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.591073 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.591099 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:02.591092361 +0000 UTC m=+1081.804463730 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: I0320 15:57:01.905483 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.905697 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:01 crc kubenswrapper[4730]: E0320 15:57:01.905752 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert podName:d8b68e41-b53d-4fb3-8a86-0c604cda0e46 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:03.905737857 +0000 UTC m=+1083.119109226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert") pod "infra-operator-controller-manager-7b9c774f96-4pkr9" (UID: "d8b68e41-b53d-4fb3-8a86-0c604cda0e46") : secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:01.997891 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.017874 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs"] Mar 20 15:57:02 crc kubenswrapper[4730]: W0320 15:57:02.029276 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b37583_ab1d_4f9e_98e9_8cb9bdcc5165.slice/crio-e8d7eff02b8b76372a54743ad9d731405d54eb260e6f785ad1cadc3653509179 WatchSource:0}: Error finding container e8d7eff02b8b76372a54743ad9d731405d54eb260e6f785ad1cadc3653509179: Status 404 returned error can't find the container with id e8d7eff02b8b76372a54743ad9d731405d54eb260e6f785ad1cadc3653509179 Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.227434 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.227617 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.227699 4730 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:04.227675062 +0000 UTC m=+1083.441046481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.388675 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.431442 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.471183 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.482877 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.497981 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.520386 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-lt49w"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.539740 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm"] Mar 
20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.563362 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.577348 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.588703 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.611954 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.624327 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.624394 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd"] Mar 20 15:57:02 crc kubenswrapper[4730]: W0320 15:57:02.630843 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4a9305_eefd_4804_ac7a_4d811bd928f5.slice/crio-9cb30f7561968d0045bfd9720752e9e4b74aedd5bd67f8abf662c7010c4d21fd WatchSource:0}: Error finding container 9cb30f7561968d0045bfd9720752e9e4b74aedd5bd67f8abf662c7010c4d21fd: Status 404 returned error can't find the container with id 9cb30f7561968d0045bfd9720752e9e4b74aedd5bd67f8abf662c7010c4d21fd Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.633549 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" 
event={"ID":"9944d85d-4f1c-4312-ac57-49ee75a8fd16","Type":"ContainerStarted","Data":"e8fdc344e225b28c02f725d643a077fdf7d7f73b111b57f938236e92ee02ba2f"} Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.636338 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk"] Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.642161 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.642210 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.643029 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.643108 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:04.643088223 +0000 UTC m=+1083.856459592 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.643156 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.643178 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:04.643170455 +0000 UTC m=+1083.856541824 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.644150 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" event={"ID":"24280954-941c-445f-aa52-e360ce544046","Type":"ContainerStarted","Data":"6c59770b79346d41c65371ffa59c7bedc5774bd27cf057465db667dfca45ebe3"} Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.694390 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" event={"ID":"61755ffd-de91-4a38-a174-fe1a4c57dfd0","Type":"ContainerStarted","Data":"58a5658fce2da71a575534b0605f321b73b382661cbc5abeff444b1dc640bae0"} Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.694749 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5"] Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.696748 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfw6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-588d4d986b-nwwzc_openstack-operators(e8ad6f56-863f-473b-a4d4-d4f70d9489a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.699064 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" podUID="e8ad6f56-863f-473b-a4d4-d4f70d9489a4" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.700939 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6r4ml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d58dc466-wqqnd_openstack-operators(c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.701173 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/watcher-operator:ee00c2d330b27d46c48ac29a20680b56ca50df3c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7w6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c5858c67b-cfmtk_openstack-operators(f00b4813-358d-49c4-bf9d-486e35f5a94f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.704654 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" podUID="f00b4813-358d-49c4-bf9d-486e35f5a94f" Mar 20 15:57:02 crc 
kubenswrapper[4730]: E0320 15:57:02.704697 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" podUID="c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef" Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.711448 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" event={"ID":"cf3ded14-d81b-4384-93e4-e51cde6a31ec","Type":"ContainerStarted","Data":"bc855432d1a2fe31aa565f0f52bf4a30b404c62e959d0a58ae7bc441629b60c2"} Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.717539 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" event={"ID":"6944c865-92a4-441c-907b-27424898cb99","Type":"ContainerStarted","Data":"3034cb64846551ae2909b23f5eb9dcd7905f4b99d08ea71aba65ddfaa5696724"} Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.731758 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" event={"ID":"36dd23cb-43b2-4c25-9e24-3e2f69f93eff","Type":"ContainerStarted","Data":"72490f86974170aaec1959afaeaed04d8cf9dfbe52ed67d0a2ca0df16df2af51"} Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.738469 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hz65q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mdzv5_openstack-operators(82cae974-2029-42c3-81bf-e9bee167e991): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 15:57:02 crc kubenswrapper[4730]: E0320 15:57:02.739556 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" podUID="82cae974-2029-42c3-81bf-e9bee167e991" Mar 20 15:57:02 crc kubenswrapper[4730]: I0320 15:57:02.739633 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" event={"ID":"87b37583-ab1d-4f9e-98e9-8cb9bdcc5165","Type":"ContainerStarted","Data":"e8d7eff02b8b76372a54743ad9d731405d54eb260e6f785ad1cadc3653509179"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.781760 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" event={"ID":"d7ad408f-56db-4b5b-bea9-ba821eae2b80","Type":"ContainerStarted","Data":"38b85e5df71c3d01a6abf19999f1840430fce94b3ac1abee5e699f0c0326a5d4"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.797387 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" event={"ID":"db4a9305-eefd-4804-ac7a-4d811bd928f5","Type":"ContainerStarted","Data":"9cb30f7561968d0045bfd9720752e9e4b74aedd5bd67f8abf662c7010c4d21fd"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.798822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" event={"ID":"cdbd62c8-9960-4257-87d9-d4923c7ef8dd","Type":"ContainerStarted","Data":"581a1a413334b45b9a22d9be8ff637ffea65dd102aafa5ad0261cc439f275cee"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.799830 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" 
event={"ID":"19a5ba3c-9f89-43f6-bd55-6998df2e3533","Type":"ContainerStarted","Data":"b93f7cbc0548701886f61a08ae3045c3ac97aca16082cd09f17ae6f91edbe0fd"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.802869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" event={"ID":"e8ad6f56-863f-473b-a4d4-d4f70d9489a4","Type":"ContainerStarted","Data":"5b1fc862b3295795aec3d62f2d172208c4189c4a71c75481fd169af14696d7c5"} Mar 20 15:57:03 crc kubenswrapper[4730]: E0320 15:57:03.817587 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad\\\"\"" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" podUID="e8ad6f56-863f-473b-a4d4-d4f70d9489a4" Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.842535 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" event={"ID":"c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef","Type":"ContainerStarted","Data":"a7fea7c7594813ef3dcae93e9bcbb34de39b987c788f796f8de9678836f63fe2"} Mar 20 15:57:03 crc kubenswrapper[4730]: E0320 15:57:03.864668 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" podUID="c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef" Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.868091 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" 
event={"ID":"d658514c-f369-4ce2-ad50-d055fd208694","Type":"ContainerStarted","Data":"f079d7f9ca767ac45da6e292b1bcb8f259c63524f1476c80f910852835296af9"} Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.905109 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" event={"ID":"f00b4813-358d-49c4-bf9d-486e35f5a94f","Type":"ContainerStarted","Data":"6b95d0657ecda55b2b17bbf5242d6b63c057c1735705995ac8dacddff38955c5"} Mar 20 15:57:03 crc kubenswrapper[4730]: E0320 15:57:03.914753 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/watcher-operator:ee00c2d330b27d46c48ac29a20680b56ca50df3c\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" podUID="f00b4813-358d-49c4-bf9d-486e35f5a94f" Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.954467 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" event={"ID":"82cae974-2029-42c3-81bf-e9bee167e991","Type":"ContainerStarted","Data":"0e906f2f4a3a756fd43e424eebf93c5dceee8ef81562506c0f023281c9ffa95b"} Mar 20 15:57:03 crc kubenswrapper[4730]: E0320 15:57:03.963396 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" podUID="82cae974-2029-42c3-81bf-e9bee167e991" Mar 20 15:57:03 crc kubenswrapper[4730]: I0320 15:57:03.965621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" 
event={"ID":"92c29eff-b9ab-4420-86c6-6b388cfc87af","Type":"ContainerStarted","Data":"2360582b0caa13e57019f3e37d3c50b63cf2025f962d5b3b40517954b864fc8c"} Mar 20 15:57:04 crc kubenswrapper[4730]: I0320 15:57:04.000920 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.001065 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.001107 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert podName:d8b68e41-b53d-4fb3-8a86-0c604cda0e46 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:08.001093635 +0000 UTC m=+1087.214465014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert") pod "infra-operator-controller-manager-7b9c774f96-4pkr9" (UID: "d8b68e41-b53d-4fb3-8a86-0c604cda0e46") : secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: I0320 15:57:04.318022 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.318268 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.318432 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:08.318416107 +0000 UTC m=+1087.531787476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: I0320 15:57:04.736529 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:04 crc kubenswrapper[4730]: I0320 15:57:04.736571 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.736730 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.736783 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:08.736769462 +0000 UTC m=+1087.950140831 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.737089 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.737157 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:08.737106542 +0000 UTC m=+1087.950477911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.990131 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad\\\"\"" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" podUID="e8ad6f56-863f-473b-a4d4-d4f70d9489a4" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.990805 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/openstack-k8s-operators/watcher-operator:ee00c2d330b27d46c48ac29a20680b56ca50df3c\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" podUID="f00b4813-358d-49c4-bf9d-486e35f5a94f" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.990912 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" podUID="82cae974-2029-42c3-81bf-e9bee167e991" Mar 20 15:57:04 crc kubenswrapper[4730]: E0320 15:57:04.991331 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" podUID="c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef" Mar 20 15:57:08 crc kubenswrapper[4730]: I0320 15:57:08.098597 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.098762 4730 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.099129 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert podName:d8b68e41-b53d-4fb3-8a86-0c604cda0e46 nodeName:}" failed. 
No retries permitted until 2026-03-20 15:57:16.099088953 +0000 UTC m=+1095.312460322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert") pod "infra-operator-controller-manager-7b9c774f96-4pkr9" (UID: "d8b68e41-b53d-4fb3-8a86-0c604cda0e46") : secret "infra-operator-webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: I0320 15:57:08.403567 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.403741 4730 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.403820 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:16.403801416 +0000 UTC m=+1095.617172785 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: I0320 15:57:08.807913 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:08 crc kubenswrapper[4730]: I0320 15:57:08.807956 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.808112 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.808157 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:16.80814336 +0000 UTC m=+1096.021514729 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.808459 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:08 crc kubenswrapper[4730]: E0320 15:57:08.808488 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:16.80848074 +0000 UTC m=+1096.021852109 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:12 crc kubenswrapper[4730]: I0320 15:57:12.880024 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:57:12 crc kubenswrapper[4730]: I0320 15:57:12.880666 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:57:14 crc kubenswrapper[4730]: E0320 15:57:14.748090 4730 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" Mar 20 15:57:14 crc kubenswrapper[4730]: E0320 15:57:14.748754 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rdf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-8464cc45fb-pf8sw_openstack-operators(f733406e-5258-4cfe-870d-4fb86152363e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:14 crc kubenswrapper[4730]: E0320 15:57:14.750062 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" podUID="f733406e-5258-4cfe-870d-4fb86152363e" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.053295 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" podUID="f733406e-5258-4cfe-870d-4fb86152363e" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.322043 4730 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.322197 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d5h4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d6b694c5-lrpjm_openstack-operators(cdbd62c8-9960-4257-87d9-d4923c7ef8dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.323370 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" podUID="cdbd62c8-9960-4257-87d9-d4923c7ef8dd" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.844760 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.845300 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsqkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-768b96df4c-g4kgd_openstack-operators(cf3ded14-d81b-4384-93e4-e51cde6a31ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:15 crc kubenswrapper[4730]: E0320 15:57:15.846573 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" podUID="cf3ded14-d81b-4384-93e4-e51cde6a31ec" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.059795 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" podUID="cf3ded14-d81b-4384-93e4-e51cde6a31ec" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.059779 4730 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" podUID="cdbd62c8-9960-4257-87d9-d4923c7ef8dd" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.116082 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.131289 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d8b68e41-b53d-4fb3-8a86-0c604cda0e46-cert\") pod \"infra-operator-controller-manager-7b9c774f96-4pkr9\" (UID: \"d8b68e41-b53d-4fb3-8a86-0c604cda0e46\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.305741 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hg8js" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.314945 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.361135 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.361369 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jrvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-c674c5965-6f2w8_openstack-operators(db4a9305-eefd-4804-ac7a-4d811bd928f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.362883 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" podUID="db4a9305-eefd-4804-ac7a-4d811bd928f5" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.421109 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.421441 4730 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.421532 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert podName:8f74be61-d309-417c-90a3-2962b57071c4 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:32.421489865 +0000 UTC m=+1111.634861234 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" (UID: "8f74be61-d309-417c-90a3-2962b57071c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.826745 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:16 crc kubenswrapper[4730]: I0320 15:57:16.826797 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.826997 4730 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.827053 4730 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:32.827039835 +0000 UTC m=+1112.040411194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "webhook-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.827069 4730 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.827180 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs podName:c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008 nodeName:}" failed. No retries permitted until 2026-03-20 15:57:32.827156199 +0000 UTC m=+1112.040527628 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs") pod "openstack-operator-controller-manager-6f58c59cbb-76ssq" (UID: "c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008") : secret "metrics-server-cert" not found Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.937495 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.937731 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kkwmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5b9f45d989-w8x5z_openstack-operators(d7ad408f-56db-4b5b-bea9-ba821eae2b80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:16 crc kubenswrapper[4730]: E0320 15:57:16.939700 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" podUID="d7ad408f-56db-4b5b-bea9-ba821eae2b80" Mar 20 15:57:17 crc kubenswrapper[4730]: E0320 15:57:17.068529 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" podUID="db4a9305-eefd-4804-ac7a-4d811bd928f5" Mar 20 15:57:17 crc kubenswrapper[4730]: E0320 15:57:17.068586 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" podUID="d7ad408f-56db-4b5b-bea9-ba821eae2b80" Mar 20 15:57:17 crc kubenswrapper[4730]: E0320 15:57:17.558816 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" Mar 20 15:57:17 crc kubenswrapper[4730]: E0320 15:57:17.559305 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpdpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-884679f54-t7kkm_openstack-operators(6944c865-92a4-441c-907b-27424898cb99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:17 crc kubenswrapper[4730]: E0320 15:57:17.560465 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" podUID="6944c865-92a4-441c-907b-27424898cb99" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.024776 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.024983 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krfh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6f787dddc9-9k6lh_openstack-operators(24280954-941c-445f-aa52-e360ce544046): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.026509 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" podUID="24280954-941c-445f-aa52-e360ce544046" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.074289 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9dd26bc51e7757d84736528d4988a1f980ad50ccb070aef6fc252e32c5c423a8\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" podUID="24280954-941c-445f-aa52-e360ce544046" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.075212 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" podUID="6944c865-92a4-441c-907b-27424898cb99" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.656702 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.657003 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7cgmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-79df6bcc97-llp6b_openstack-operators(d658514c-f369-4ce2-ad50-d055fd208694): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:57:18 crc kubenswrapper[4730]: E0320 15:57:18.659152 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" podUID="d658514c-f369-4ce2-ad50-d055fd208694" Mar 20 15:57:18 crc kubenswrapper[4730]: I0320 15:57:18.951458 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9"] Mar 20 15:57:18 crc kubenswrapper[4730]: W0320 15:57:18.964120 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8b68e41_b53d_4fb3_8a86_0c604cda0e46.slice/crio-45848c5168243fc5bf9cef27b7a4afa483350368b8eaf99823d67cdd8005b630 WatchSource:0}: Error finding container 45848c5168243fc5bf9cef27b7a4afa483350368b8eaf99823d67cdd8005b630: Status 404 returned error can't find the container with id 45848c5168243fc5bf9cef27b7a4afa483350368b8eaf99823d67cdd8005b630 Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.085021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" event={"ID":"d8b68e41-b53d-4fb3-8a86-0c604cda0e46","Type":"ContainerStarted","Data":"45848c5168243fc5bf9cef27b7a4afa483350368b8eaf99823d67cdd8005b630"} Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.089340 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" event={"ID":"36dd23cb-43b2-4c25-9e24-3e2f69f93eff","Type":"ContainerStarted","Data":"9ff53b311f2f8ff5ec20d469511209241f556a39d369fedd4ad7b6d2d624630a"} Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.089699 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.092108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" 
event={"ID":"acffaecc-dd6c-4819-91cf-99c5d0154143","Type":"ContainerStarted","Data":"a90c4d6afa0b750bfaee685cd5a45009fdac35d3b730420486be1bc882b46468"} Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.092174 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.097028 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" event={"ID":"19a5ba3c-9f89-43f6-bd55-6998df2e3533","Type":"ContainerStarted","Data":"df708a5bd69cf0b999854d5a59081b33481add9b718024ef449b1b8e6c7d877c"} Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.097202 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:19 crc kubenswrapper[4730]: E0320 15:57:19.100075 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" podUID="d658514c-f369-4ce2-ad50-d055fd208694" Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.118018 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" podStartSLOduration=2.919874029 podStartE2EDuration="19.117994963s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.492544508 +0000 UTC m=+1081.705915877" lastFinishedPulling="2026-03-20 15:57:18.690665432 +0000 UTC m=+1097.904036811" observedRunningTime="2026-03-20 15:57:19.108532603 +0000 UTC m=+1098.321903972" 
watchObservedRunningTime="2026-03-20 15:57:19.117994963 +0000 UTC m=+1098.331366332" Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.179818 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" podStartSLOduration=3.108321185 podStartE2EDuration="19.179799156s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.631155062 +0000 UTC m=+1081.844526431" lastFinishedPulling="2026-03-20 15:57:18.702633033 +0000 UTC m=+1097.916004402" observedRunningTime="2026-03-20 15:57:19.17188201 +0000 UTC m=+1098.385253379" watchObservedRunningTime="2026-03-20 15:57:19.179799156 +0000 UTC m=+1098.393170525" Mar 20 15:57:19 crc kubenswrapper[4730]: I0320 15:57:19.189868 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" podStartSLOduration=2.762878892 podStartE2EDuration="20.189848942s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:01.272798761 +0000 UTC m=+1080.486170130" lastFinishedPulling="2026-03-20 15:57:18.699768811 +0000 UTC m=+1097.913140180" observedRunningTime="2026-03-20 15:57:19.188159874 +0000 UTC m=+1098.401531243" watchObservedRunningTime="2026-03-20 15:57:19.189848942 +0000 UTC m=+1098.403220311" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.127011 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" event={"ID":"87b37583-ab1d-4f9e-98e9-8cb9bdcc5165","Type":"ContainerStarted","Data":"20f25722465c682576a32ae4c86d08f9fc1d891b4133459452e4ababb33b352c"} Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.127336 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:20 crc 
kubenswrapper[4730]: I0320 15:57:20.136144 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" event={"ID":"9944d85d-4f1c-4312-ac57-49ee75a8fd16","Type":"ContainerStarted","Data":"69ecde21dc19745c54d0c34f06496f5328e8e68eed2a10d90ff03d02fb5cf23a"} Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.136221 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.138761 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" event={"ID":"61755ffd-de91-4a38-a174-fe1a4c57dfd0","Type":"ContainerStarted","Data":"59162b0311a880f7c0c7f4a4fad71dc31d8fed5e2134cde37845cf230e6a1c4e"} Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.138887 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.143669 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" event={"ID":"92c29eff-b9ab-4420-86c6-6b388cfc87af","Type":"ContainerStarted","Data":"448bf8cc49eec8fef296101660f5fc0c7f56530380214b089ca64e190103ced2"} Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.143771 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.146210 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" 
event={"ID":"4fb51ed6-04e3-40db-ab21-eb0fe66442fe","Type":"ContainerStarted","Data":"ea7deb44b6fc5317f3970450d061efbbcddadb8a2b3117b80c462b39c592a24f"} Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.148275 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.149831 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" podStartSLOduration=3.498451994 podStartE2EDuration="20.149812108s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.039464952 +0000 UTC m=+1081.252836321" lastFinishedPulling="2026-03-20 15:57:18.690825066 +0000 UTC m=+1097.904196435" observedRunningTime="2026-03-20 15:57:20.145070172 +0000 UTC m=+1099.358441541" watchObservedRunningTime="2026-03-20 15:57:20.149812108 +0000 UTC m=+1099.363183467" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.161296 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" podStartSLOduration=4.139236405 podStartE2EDuration="20.161280875s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.667405387 +0000 UTC m=+1081.880776756" lastFinishedPulling="2026-03-20 15:57:18.689449867 +0000 UTC m=+1097.902821226" observedRunningTime="2026-03-20 15:57:20.157050894 +0000 UTC m=+1099.370422263" watchObservedRunningTime="2026-03-20 15:57:20.161280875 +0000 UTC m=+1099.374652244" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.174317 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" podStartSLOduration=3.980947599 podStartE2EDuration="20.174297606s" podCreationTimestamp="2026-03-20 
15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.506378083 +0000 UTC m=+1081.719749452" lastFinishedPulling="2026-03-20 15:57:18.69972809 +0000 UTC m=+1097.913099459" observedRunningTime="2026-03-20 15:57:20.172389812 +0000 UTC m=+1099.385761181" watchObservedRunningTime="2026-03-20 15:57:20.174297606 +0000 UTC m=+1099.387668975" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.192001 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" podStartSLOduration=3.928321668 podStartE2EDuration="20.191978411s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.438327881 +0000 UTC m=+1081.651699250" lastFinishedPulling="2026-03-20 15:57:18.701984624 +0000 UTC m=+1097.915355993" observedRunningTime="2026-03-20 15:57:20.189383277 +0000 UTC m=+1099.402754666" watchObservedRunningTime="2026-03-20 15:57:20.191978411 +0000 UTC m=+1099.405349790" Mar 20 15:57:20 crc kubenswrapper[4730]: I0320 15:57:20.215018 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" podStartSLOduration=3.5115083289999998 podStartE2EDuration="21.215002257s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:00.986149704 +0000 UTC m=+1080.199521073" lastFinishedPulling="2026-03-20 15:57:18.689643612 +0000 UTC m=+1097.903015001" observedRunningTime="2026-03-20 15:57:20.208966095 +0000 UTC m=+1099.422337474" watchObservedRunningTime="2026-03-20 15:57:20.215002257 +0000 UTC m=+1099.428373626" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.194690 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" 
event={"ID":"d8b68e41-b53d-4fb3-8a86-0c604cda0e46","Type":"ContainerStarted","Data":"5fb0ff525af3cf18991d023fc39f7fba44f374ac1e9aa04bdc8441f66fe1756f"} Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.195172 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.206603 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" event={"ID":"f00b4813-358d-49c4-bf9d-486e35f5a94f","Type":"ContainerStarted","Data":"017d151d5e07cd8c9d38dcba58fdcc0910cdd019aa668a50bf0bc303aec28023"} Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.207560 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.211198 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" event={"ID":"82cae974-2029-42c3-81bf-e9bee167e991","Type":"ContainerStarted","Data":"4f271b3f9738e4f9672107879d5284ffdb334b69bf42c7a9be9d6c97c168cb77"} Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.214191 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" event={"ID":"e8ad6f56-863f-473b-a4d4-d4f70d9489a4","Type":"ContainerStarted","Data":"796b15e07a238d32353575620108c25121126d0ec7fd9b43a7d6fa102c17301a"} Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.214448 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.216744 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" event={"ID":"c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef","Type":"ContainerStarted","Data":"fe45ed2582e95ad4f73c51c40d883f2f26a7c1cf741e5dccb7005bb8f04578be"} Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.217010 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.232334 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" podStartSLOduration=20.165020031 podStartE2EDuration="25.232310572s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:18.988819137 +0000 UTC m=+1098.202190506" lastFinishedPulling="2026-03-20 15:57:24.056109678 +0000 UTC m=+1103.269481047" observedRunningTime="2026-03-20 15:57:25.226581159 +0000 UTC m=+1104.439952558" watchObservedRunningTime="2026-03-20 15:57:25.232310572 +0000 UTC m=+1104.445681981" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.250617 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" podStartSLOduration=3.901627385 podStartE2EDuration="25.250587952s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.701031756 +0000 UTC m=+1081.914403125" lastFinishedPulling="2026-03-20 15:57:24.049992323 +0000 UTC m=+1103.263363692" observedRunningTime="2026-03-20 15:57:25.245435205 +0000 UTC m=+1104.458806584" watchObservedRunningTime="2026-03-20 15:57:25.250587952 +0000 UTC m=+1104.463959351" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.276323 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" 
podStartSLOduration=4.916952709 podStartE2EDuration="26.276293052s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.69662319 +0000 UTC m=+1081.909994559" lastFinishedPulling="2026-03-20 15:57:24.055963533 +0000 UTC m=+1103.269334902" observedRunningTime="2026-03-20 15:57:25.267983906 +0000 UTC m=+1104.481355315" watchObservedRunningTime="2026-03-20 15:57:25.276293052 +0000 UTC m=+1104.489664461" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.314660 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mdzv5" podStartSLOduration=3.960155757 podStartE2EDuration="25.314639471s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.738291279 +0000 UTC m=+1081.951662648" lastFinishedPulling="2026-03-20 15:57:24.092774973 +0000 UTC m=+1103.306146362" observedRunningTime="2026-03-20 15:57:25.288281382 +0000 UTC m=+1104.501652771" watchObservedRunningTime="2026-03-20 15:57:25.314639471 +0000 UTC m=+1104.528010850" Mar 20 15:57:25 crc kubenswrapper[4730]: I0320 15:57:25.348625 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" podStartSLOduration=4.998073374 podStartE2EDuration="26.348605276s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.700749538 +0000 UTC m=+1081.914120907" lastFinishedPulling="2026-03-20 15:57:24.05128144 +0000 UTC m=+1103.264652809" observedRunningTime="2026-03-20 15:57:25.341925677 +0000 UTC m=+1104.555297056" watchObservedRunningTime="2026-03-20 15:57:25.348605276 +0000 UTC m=+1104.561976655" Mar 20 15:57:27 crc kubenswrapper[4730]: I0320 15:57:27.238571 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" 
event={"ID":"cf3ded14-d81b-4384-93e4-e51cde6a31ec","Type":"ContainerStarted","Data":"2ba359c5692f05c532cfa9341dede45b2c57a3bf1dc1e5d078311209dec28d5a"} Mar 20 15:57:27 crc kubenswrapper[4730]: I0320 15:57:27.239485 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:27 crc kubenswrapper[4730]: I0320 15:57:27.263607 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" podStartSLOduration=2.826154018 podStartE2EDuration="27.263581764s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.502478391 +0000 UTC m=+1081.715849770" lastFinishedPulling="2026-03-20 15:57:26.939906157 +0000 UTC m=+1106.153277516" observedRunningTime="2026-03-20 15:57:27.255783972 +0000 UTC m=+1106.469155381" watchObservedRunningTime="2026-03-20 15:57:27.263581764 +0000 UTC m=+1106.476953173" Mar 20 15:57:28 crc kubenswrapper[4730]: I0320 15:57:28.246856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" event={"ID":"d7ad408f-56db-4b5b-bea9-ba821eae2b80","Type":"ContainerStarted","Data":"b88b5c0c25cd9b6cd8aa454324911b4841e83a5be7ccb7cf16bc6eb6e133f3f2"} Mar 20 15:57:28 crc kubenswrapper[4730]: I0320 15:57:28.247768 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:28 crc kubenswrapper[4730]: I0320 15:57:28.267292 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" podStartSLOduration=2.902335698 podStartE2EDuration="28.267276181s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.638673907 +0000 UTC 
m=+1081.852045276" lastFinishedPulling="2026-03-20 15:57:28.00361438 +0000 UTC m=+1107.216985759" observedRunningTime="2026-03-20 15:57:28.260890709 +0000 UTC m=+1107.474262068" watchObservedRunningTime="2026-03-20 15:57:28.267276181 +0000 UTC m=+1107.480647550" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.266436 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-dmd8z" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.330886 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-v96m5" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.490989 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-bqjxs" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.525538 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-rnx2d" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.542044 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-wqqnd" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.578137 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nwwzc" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.733775 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xw6kk" Mar 20 15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.789586 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-l7v9q" Mar 20 
15:57:30 crc kubenswrapper[4730]: I0320 15:57:30.917305 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-lt49w" Mar 20 15:57:31 crc kubenswrapper[4730]: I0320 15:57:31.025665 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bm7hr" Mar 20 15:57:31 crc kubenswrapper[4730]: I0320 15:57:31.064336 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c5858c67b-cfmtk" Mar 20 15:57:31 crc kubenswrapper[4730]: I0320 15:57:31.271028 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" event={"ID":"db4a9305-eefd-4804-ac7a-4d811bd928f5","Type":"ContainerStarted","Data":"2e5e47598bc490cf46a7f146d14a0101f676bf93573bd08ec5fa627f467be2bd"} Mar 20 15:57:31 crc kubenswrapper[4730]: I0320 15:57:31.271222 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:31 crc kubenswrapper[4730]: I0320 15:57:31.286485 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" podStartSLOduration=2.917380278 podStartE2EDuration="31.286471802s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.658790381 +0000 UTC m=+1081.872161750" lastFinishedPulling="2026-03-20 15:57:31.027881905 +0000 UTC m=+1110.241253274" observedRunningTime="2026-03-20 15:57:31.284048703 +0000 UTC m=+1110.497420082" watchObservedRunningTime="2026-03-20 15:57:31.286471802 +0000 UTC m=+1110.499843171" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.285351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" event={"ID":"f733406e-5258-4cfe-870d-4fb86152363e","Type":"ContainerStarted","Data":"d6839305cb3287a7265c266d8c0b765de6ece764de0fad970fcb7410eee58982"} Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.287062 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.320115 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" podStartSLOduration=3.611944902 podStartE2EDuration="33.320094059s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:01.474407352 +0000 UTC m=+1080.687778721" lastFinishedPulling="2026-03-20 15:57:31.182556509 +0000 UTC m=+1110.395927878" observedRunningTime="2026-03-20 15:57:32.310198918 +0000 UTC m=+1111.523570297" watchObservedRunningTime="2026-03-20 15:57:32.320094059 +0000 UTC m=+1111.533465448" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.489931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.499019 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f74be61-d309-417c-90a3-2962b57071c4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-f8l2x\" (UID: \"8f74be61-d309-417c-90a3-2962b57071c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 
15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.645018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-57nkr" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.654093 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.896989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.897434 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.902152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-webhook-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:32 crc kubenswrapper[4730]: I0320 15:57:32.904439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008-metrics-certs\") pod \"openstack-operator-controller-manager-6f58c59cbb-76ssq\" (UID: \"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008\") " pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:33 crc kubenswrapper[4730]: I0320 15:57:33.082000 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x"] Mar 20 15:57:33 crc kubenswrapper[4730]: I0320 15:57:33.084619 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nctfr" Mar 20 15:57:33 crc kubenswrapper[4730]: I0320 15:57:33.090528 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:33 crc kubenswrapper[4730]: W0320 15:57:33.091301 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f74be61_d309_417c_90a3_2962b57071c4.slice/crio-078707346a29bd1255e679676c6e90f14c0c76fae721b0b15b5642fc1d59b4ee WatchSource:0}: Error finding container 078707346a29bd1255e679676c6e90f14c0c76fae721b0b15b5642fc1d59b4ee: Status 404 returned error can't find the container with id 078707346a29bd1255e679676c6e90f14c0c76fae721b0b15b5642fc1d59b4ee Mar 20 15:57:33 crc kubenswrapper[4730]: I0320 15:57:33.308218 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" event={"ID":"8f74be61-d309-417c-90a3-2962b57071c4","Type":"ContainerStarted","Data":"078707346a29bd1255e679676c6e90f14c0c76fae721b0b15b5642fc1d59b4ee"} Mar 20 15:57:33 crc kubenswrapper[4730]: I0320 15:57:33.337784 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq"] Mar 20 15:57:33 crc kubenswrapper[4730]: W0320 15:57:33.347161 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc66e7fcc_f4ab_4d70_ad2b_b9186a4a2008.slice/crio-7ce315dfb8fa9c5b992fbe6ce1bc45ec6438bde4e26c9d2aa087c65f50979de1 WatchSource:0}: Error finding container 7ce315dfb8fa9c5b992fbe6ce1bc45ec6438bde4e26c9d2aa087c65f50979de1: Status 404 returned error can't find the container with id 7ce315dfb8fa9c5b992fbe6ce1bc45ec6438bde4e26c9d2aa087c65f50979de1 Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.316693 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" event={"ID":"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008","Type":"ContainerStarted","Data":"c8298b88b014ea89cabc3e6b0ec9d07375465b7f4747f88c5c164a9a6ea1f076"} Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.317326 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" event={"ID":"c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008","Type":"ContainerStarted","Data":"7ce315dfb8fa9c5b992fbe6ce1bc45ec6438bde4e26c9d2aa087c65f50979de1"} Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.318499 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.332564 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" event={"ID":"6944c865-92a4-441c-907b-27424898cb99","Type":"ContainerStarted","Data":"75f68910e535049be37918a27f66338e1a4493a21f3138874b0a97cef82a892c"} Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.332896 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.334013 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" event={"ID":"cdbd62c8-9960-4257-87d9-d4923c7ef8dd","Type":"ContainerStarted","Data":"ed974fa89b66e95c6f70abb9bbef5505331291b3c46c8815467ac7c4644ed4bf"} Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.334544 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.335910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" event={"ID":"24280954-941c-445f-aa52-e360ce544046","Type":"ContainerStarted","Data":"3b2de06b0db9205a91cd21cef240e85947cdf44a30605b8850a783e857ce58a5"} Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.336301 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.368836 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" podStartSLOduration=34.368820067 podStartE2EDuration="34.368820067s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:57:34.35590974 +0000 UTC m=+1113.569281119" watchObservedRunningTime="2026-03-20 15:57:34.368820067 +0000 UTC m=+1113.582191436" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.370926 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" podStartSLOduration=3.585917131 podStartE2EDuration="34.370919556s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.660948192 +0000 UTC m=+1081.874319561" lastFinishedPulling="2026-03-20 15:57:33.445950617 +0000 UTC m=+1112.659321986" observedRunningTime="2026-03-20 15:57:34.36823167 +0000 UTC m=+1113.581603049" watchObservedRunningTime="2026-03-20 15:57:34.370919556 +0000 UTC m=+1113.584290925" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.383899 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" podStartSLOduration=2.9038323200000002 podStartE2EDuration="34.383884855s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.039667678 +0000 UTC m=+1081.253039047" lastFinishedPulling="2026-03-20 15:57:33.519720213 +0000 UTC m=+1112.733091582" observedRunningTime="2026-03-20 15:57:34.382522386 +0000 UTC m=+1113.595893755" watchObservedRunningTime="2026-03-20 15:57:34.383884855 +0000 UTC m=+1113.597256224" Mar 20 15:57:34 crc kubenswrapper[4730]: I0320 15:57:34.396667 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" podStartSLOduration=3.472215752 podStartE2EDuration="34.396648657s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.411403223 +0000 UTC m=+1081.624774592" lastFinishedPulling="2026-03-20 15:57:33.335836128 +0000 UTC m=+1112.549207497" observedRunningTime="2026-03-20 15:57:34.395903026 +0000 UTC m=+1113.609274395" watchObservedRunningTime="2026-03-20 15:57:34.396648657 +0000 UTC m=+1113.610020026" Mar 20 15:57:35 crc kubenswrapper[4730]: I0320 15:57:35.344306 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" event={"ID":"8f74be61-d309-417c-90a3-2962b57071c4","Type":"ContainerStarted","Data":"70c8fd815396c9bece6283008ba5280cf90bc831678f763c27a770c0bd8ce113"} Mar 20 15:57:35 crc kubenswrapper[4730]: I0320 15:57:35.344849 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:35 crc kubenswrapper[4730]: I0320 15:57:35.346536 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" event={"ID":"d658514c-f369-4ce2-ad50-d055fd208694","Type":"ContainerStarted","Data":"a2959034776d964bb1bbe003c5623b055223b6e55f21e17f7e03a3cb64a6bb33"} Mar 20 15:57:35 crc kubenswrapper[4730]: I0320 15:57:35.395370 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" podStartSLOduration=3.938011231 podStartE2EDuration="36.395350193s" podCreationTimestamp="2026-03-20 15:56:59 +0000 UTC" firstStartedPulling="2026-03-20 15:57:02.634309242 +0000 UTC m=+1081.847680601" lastFinishedPulling="2026-03-20 15:57:35.091648194 +0000 UTC m=+1114.305019563" observedRunningTime="2026-03-20 15:57:35.391842883 +0000 UTC m=+1114.605214242" watchObservedRunningTime="2026-03-20 15:57:35.395350193 +0000 UTC m=+1114.608721582" Mar 20 15:57:35 crc kubenswrapper[4730]: I0320 15:57:35.395978 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" podStartSLOduration=33.399455806 podStartE2EDuration="35.39596892s" podCreationTimestamp="2026-03-20 15:57:00 +0000 UTC" firstStartedPulling="2026-03-20 15:57:33.099050341 +0000 UTC m=+1112.312421750" lastFinishedPulling="2026-03-20 15:57:35.095563495 +0000 UTC m=+1114.308934864" 
observedRunningTime="2026-03-20 15:57:35.379149562 +0000 UTC m=+1114.592520931" watchObservedRunningTime="2026-03-20 15:57:35.39596892 +0000 UTC m=+1114.609340299" Mar 20 15:57:36 crc kubenswrapper[4730]: I0320 15:57:36.321687 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-4pkr9" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.382721 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-pf8sw" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.427906 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9k6lh" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.584001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.586733 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-llp6b" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.757654 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-g4kgd" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.873687 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-w8x5z" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.881010 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-t7kkm" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.976417 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-6f2w8" Mar 20 15:57:40 crc kubenswrapper[4730]: I0320 15:57:40.997940 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-lrpjm" Mar 20 15:57:42 crc kubenswrapper[4730]: I0320 15:57:42.662057 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-f8l2x" Mar 20 15:57:42 crc kubenswrapper[4730]: I0320 15:57:42.880761 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:57:42 crc kubenswrapper[4730]: I0320 15:57:42.880848 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:57:43 crc kubenswrapper[4730]: I0320 15:57:43.099273 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f58c59cbb-76ssq" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.141371 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567038-qvgqb"] Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.143467 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.146861 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.148158 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.150189 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.155925 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567038-qvgqb"] Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.232070 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhrt\" (UniqueName: \"kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt\") pod \"auto-csr-approver-29567038-qvgqb\" (UID: \"67854402-4e0e-4ebe-b9d4-700669827780\") " pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.333586 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhrt\" (UniqueName: \"kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt\") pod \"auto-csr-approver-29567038-qvgqb\" (UID: \"67854402-4e0e-4ebe-b9d4-700669827780\") " pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.359013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhrt\" (UniqueName: \"kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt\") pod \"auto-csr-approver-29567038-qvgqb\" (UID: \"67854402-4e0e-4ebe-b9d4-700669827780\") " 
pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.467374 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:00 crc kubenswrapper[4730]: I0320 15:58:00.918778 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567038-qvgqb"] Mar 20 15:58:01 crc kubenswrapper[4730]: I0320 15:58:01.606782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" event={"ID":"67854402-4e0e-4ebe-b9d4-700669827780","Type":"ContainerStarted","Data":"9440066ffdb65a07be2e47ff1767e0405bb8e3440041b92f4bf330c3197708ed"} Mar 20 15:58:02 crc kubenswrapper[4730]: I0320 15:58:02.614130 4730 generic.go:334] "Generic (PLEG): container finished" podID="67854402-4e0e-4ebe-b9d4-700669827780" containerID="caec51e4f1b5d91020f11b5970f403cd0356b8c6fa1f260cecf4ea6e449980f1" exitCode=0 Mar 20 15:58:02 crc kubenswrapper[4730]: I0320 15:58:02.614412 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" event={"ID":"67854402-4e0e-4ebe-b9d4-700669827780","Type":"ContainerDied","Data":"caec51e4f1b5d91020f11b5970f403cd0356b8c6fa1f260cecf4ea6e449980f1"} Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.869548 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"] Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.870964 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.873533 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.873930 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6gvmp" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.874100 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.874357 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.892568 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"] Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.916447 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.931413 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"] Mar 20 15:58:03 crc kubenswrapper[4730]: E0320 15:58:03.932546 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67854402-4e0e-4ebe-b9d4-700669827780" containerName="oc" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.932565 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="67854402-4e0e-4ebe-b9d4-700669827780" containerName="oc" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.932693 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="67854402-4e0e-4ebe-b9d4-700669827780" containerName="oc" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.933370 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.935881 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.955609 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"] Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.989230 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:03 crc kubenswrapper[4730]: I0320 15:58:03.989427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsxv\" (UniqueName: \"kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.090652 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvhrt\" (UniqueName: \"kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt\") pod \"67854402-4e0e-4ebe-b9d4-700669827780\" (UID: \"67854402-4e0e-4ebe-b9d4-700669827780\") " Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.091089 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxh4d\" (UniqueName: \"kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 
15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.091157 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.091294 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.091348 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsxv\" (UniqueName: \"kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.091400 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.092103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.099420 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt" (OuterVolumeSpecName: "kube-api-access-bvhrt") pod "67854402-4e0e-4ebe-b9d4-700669827780" (UID: "67854402-4e0e-4ebe-b9d4-700669827780"). InnerVolumeSpecName "kube-api-access-bvhrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.111833 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsxv\" (UniqueName: \"kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv\") pod \"dnsmasq-dns-784b55c5d9-5wk8n\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") " pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.192926 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxh4d\" (UniqueName: \"kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.193219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.193365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.193476 
4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvhrt\" (UniqueName: \"kubernetes.io/projected/67854402-4e0e-4ebe-b9d4-700669827780-kube-api-access-bvhrt\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.194224 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.194780 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.227732 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxh4d\" (UniqueName: \"kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d\") pod \"dnsmasq-dns-bf56b5889-hwkdn\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.232157 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.253894 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.631112 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" event={"ID":"67854402-4e0e-4ebe-b9d4-700669827780","Type":"ContainerDied","Data":"9440066ffdb65a07be2e47ff1767e0405bb8e3440041b92f4bf330c3197708ed"} Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.631157 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9440066ffdb65a07be2e47ff1767e0405bb8e3440041b92f4bf330c3197708ed" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.631137 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567038-qvgqb" Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.705355 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"] Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.741930 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"] Mar 20 15:58:04 crc kubenswrapper[4730]: W0320 15:58:04.752956 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae05e358_cb9f_4772_a644_8ec5131415eb.slice/crio-0f15db04a290faf8bce634d61170874353004570e3569dc452521a6a7ac01ec4 WatchSource:0}: Error finding container 0f15db04a290faf8bce634d61170874353004570e3569dc452521a6a7ac01ec4: Status 404 returned error can't find the container with id 0f15db04a290faf8bce634d61170874353004570e3569dc452521a6a7ac01ec4 Mar 20 15:58:04 crc kubenswrapper[4730]: I0320 15:58:04.992097 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567032-mfvhl"] Mar 20 15:58:05 crc kubenswrapper[4730]: I0320 15:58:05.000050 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-infra/auto-csr-approver-29567032-mfvhl"] Mar 20 15:58:05 crc kubenswrapper[4730]: I0320 15:58:05.542644 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76deb34d-7c3d-4510-9b0a-ac56dcca047a" path="/var/lib/kubelet/pods/76deb34d-7c3d-4510-9b0a-ac56dcca047a/volumes" Mar 20 15:58:05 crc kubenswrapper[4730]: I0320 15:58:05.640945 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" event={"ID":"dcd9cd10-c13e-446b-9dad-8b30f04de37e","Type":"ContainerStarted","Data":"cc74e55810f8f5b64f9744e711295d16d00bc26b41f45c20f60b7913df187ae8"} Mar 20 15:58:05 crc kubenswrapper[4730]: I0320 15:58:05.643087 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" event={"ID":"ae05e358-cb9f-4772-a644-8ec5131415eb","Type":"ContainerStarted","Data":"0f15db04a290faf8bce634d61170874353004570e3569dc452521a6a7ac01ec4"} Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.671234 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"] Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.690800 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"] Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.692044 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.703272 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"] Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.843728 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrdjk\" (UniqueName: \"kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.843821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.843943 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.935694 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"] Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.945654 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc 
kubenswrapper[4730]: I0320 15:58:07.945742 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdjk\" (UniqueName: \"kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.945807 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.946744 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.946996 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.972615 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdjk\" (UniqueName: \"kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk\") pod \"dnsmasq-dns-68d64f5f8f-ncpth\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") " pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.972684 4730 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7847d45595-fnchx"] Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.974214 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:07 crc kubenswrapper[4730]: I0320 15:58:07.979617 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-fnchx"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.014747 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.046809 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.046910 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkz6\" (UniqueName: \"kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.046969 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.148007 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.148112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgkz6\" (UniqueName: \"kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.148208 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.148945 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.149017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config\") pod \"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.176218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgkz6\" (UniqueName: \"kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6\") pod 
\"dnsmasq-dns-7847d45595-fnchx\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.254351 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.274618 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.275710 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.293066 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.334350 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-fnchx" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.350544 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d94g\" (UniqueName: \"kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.350589 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.350644 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.452297 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d94g\" (UniqueName: \"kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.452617 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.452738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.453864 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.453928 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: 
\"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.470786 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d94g\" (UniqueName: \"kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g\") pod \"dnsmasq-dns-74bcc47849-2r2xb\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.592697 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.832722 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.838569 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.841638 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.841977 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.843959 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rwcvj" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.844493 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.845617 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.846091 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" 
Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.846308 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.853688 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963083 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963265 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jnf\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963491 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963563 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963602 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963814 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:08 crc kubenswrapper[4730]: I0320 15:58:08.963898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065203 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jnf\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065281 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065313 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065344 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065367 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065401 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065448 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065476 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065510 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.065544 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.066389 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.066470 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.067087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.067367 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.067406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.068530 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.071925 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.072226 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.072437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.074888 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.087152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jnf\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.089484 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") " pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.119806 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.121227 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.126726 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.127740 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.127906 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-sl7kk" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.170327 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.170655 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.170759 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.170803 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.171069 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.179955 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273151 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273199 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273229 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273283 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9ca02d-e20f-4f55-ba14-92b91812afb6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273320 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9ca02d-e20f-4f55-ba14-92b91812afb6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273574 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273658 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8vn\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-kube-api-access-rj8vn\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.273913 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc 
kubenswrapper[4730]: I0320 15:58:09.273969 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.274053 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.375576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.375650 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8vn\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-kube-api-access-rj8vn\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.376310 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 
15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.376508 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.377570 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.377661 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.377806 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.377854 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 
20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.377901 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.378071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.378118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.378135 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9ca02d-e20f-4f55-ba14-92b91812afb6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.379071 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc 
kubenswrapper[4730]: I0320 15:58:09.379810 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.379863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.380201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9ca02d-e20f-4f55-ba14-92b91812afb6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.380536 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9ca02d-e20f-4f55-ba14-92b91812afb6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.382390 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.383060 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.390140 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9ca02d-e20f-4f55-ba14-92b91812afb6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.390790 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9ca02d-e20f-4f55-ba14-92b91812afb6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.392163 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8vn\" (UniqueName: \"kubernetes.io/projected/df9ca02d-e20f-4f55-ba14-92b91812afb6-kube-api-access-rj8vn\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.417559 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"df9ca02d-e20f-4f55-ba14-92b91812afb6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.434004 4730 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.438255 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.444501 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.444781 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.444920 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.445271 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.445493 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.445627 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dlhcb" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.446344 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.469939 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.485671 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.583584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.583838 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.583921 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584011 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584118 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584298 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584372 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584483 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584599 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.584688 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vth2k\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.686909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vth2k\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.686967 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.686991 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687042 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc 
kubenswrapper[4730]: I0320 15:58:09.687072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687112 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687135 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687152 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.687223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.688172 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.688654 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.688919 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.689551 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.689662 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.690219 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.694198 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.696405 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.705803 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc 
kubenswrapper[4730]: I0320 15:58:09.712945 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vth2k\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.724879 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.729457 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:09 crc kubenswrapper[4730]: I0320 15:58:09.782450 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.802782 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.806998 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.811440 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.812257 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8d9gs" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.812402 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.812742 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.818808 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.824134 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910521 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glhm\" (UniqueName: \"kubernetes.io/projected/6abf778f-200f-4d48-97b6-08a638b4efa2-kube-api-access-9glhm\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910631 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910680 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910708 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910761 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910786 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910824 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:10 crc kubenswrapper[4730]: I0320 15:58:10.910851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012097 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012130 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012161 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-galera-tls-certs\") pod \"openstack-galera-0\" 
(UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012199 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012221 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012286 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glhm\" (UniqueName: \"kubernetes.io/projected/6abf778f-200f-4d48-97b6-08a638b4efa2-kube-api-access-9glhm\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.012867 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.013315 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-kolla-config\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.013510 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-default\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.013958 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abf778f-200f-4d48-97b6-08a638b4efa2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.014014 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6abf778f-200f-4d48-97b6-08a638b4efa2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.018154 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.030963 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6abf778f-200f-4d48-97b6-08a638b4efa2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.035873 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glhm\" (UniqueName: 
\"kubernetes.io/projected/6abf778f-200f-4d48-97b6-08a638b4efa2-kube-api-access-9glhm\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.040825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"6abf778f-200f-4d48-97b6-08a638b4efa2\") " pod="openstack/openstack-galera-0" Mar 20 15:58:11 crc kubenswrapper[4730]: I0320 15:58:11.129944 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.232598 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.234161 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.237547 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ghtpc" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.238175 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.238205 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.238337 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.258718 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332354 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332439 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332474 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332542 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332571 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9x25\" (UniqueName: \"kubernetes.io/projected/899bd9ae-9354-4e70-ad37-b438a5a33a24-kube-api-access-f9x25\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332598 
4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332631 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.332670 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.349282 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.350439 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.352468 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.352667 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8d8cg" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.352843 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.354205 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.433933 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434017 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434067 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434093 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434122 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434143 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggm6\" (UniqueName: \"kubernetes.io/projected/84bbdebb-43de-41d6-82d4-71b0948c25f8-kube-api-access-vggm6\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434177 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-kolla-config\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434226 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9x25\" (UniqueName: 
\"kubernetes.io/projected/899bd9ae-9354-4e70-ad37-b438a5a33a24-kube-api-access-f9x25\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.434346 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.435186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.435304 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.435533 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.436576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/899bd9ae-9354-4e70-ad37-b438a5a33a24-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.436656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.436988 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899bd9ae-9354-4e70-ad37-b438a5a33a24-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.437605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.437654 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-config-data\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.440292 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.450516 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/899bd9ae-9354-4e70-ad37-b438a5a33a24-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.454153 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9x25\" (UniqueName: \"kubernetes.io/projected/899bd9ae-9354-4e70-ad37-b438a5a33a24-kube-api-access-f9x25\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.468904 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"899bd9ae-9354-4e70-ad37-b438a5a33a24\") " pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.539008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggm6\" (UniqueName: \"kubernetes.io/projected/84bbdebb-43de-41d6-82d4-71b0948c25f8-kube-api-access-vggm6\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.539056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-kolla-config\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 
15:58:12.539099 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.539124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-config-data\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.539179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.540073 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-config-data\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.540888 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84bbdebb-43de-41d6-82d4-71b0948c25f8-kolla-config\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.542326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.542923 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84bbdebb-43de-41d6-82d4-71b0948c25f8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.566015 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggm6\" (UniqueName: \"kubernetes.io/projected/84bbdebb-43de-41d6-82d4-71b0948c25f8-kube-api-access-vggm6\") pod \"memcached-0\" (UID: \"84bbdebb-43de-41d6-82d4-71b0948c25f8\") " pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.569083 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.666263 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.880450 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.880496 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.880531 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.881557 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 15:58:12 crc kubenswrapper[4730]: I0320 15:58:12.881612 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5" gracePeriod=600 Mar 20 15:58:13 crc kubenswrapper[4730]: I0320 15:58:13.733237 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" 
containerID="5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5" exitCode=0 Mar 20 15:58:13 crc kubenswrapper[4730]: I0320 15:58:13.733282 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5"} Mar 20 15:58:13 crc kubenswrapper[4730]: I0320 15:58:13.733685 4730 scope.go:117] "RemoveContainer" containerID="4969adb306e949f48cbf48ac9e1452830c3458afd1750aa781060e2cc0952393" Mar 20 15:58:14 crc kubenswrapper[4730]: I0320 15:58:14.891657 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 15:58:14 crc kubenswrapper[4730]: I0320 15:58:14.893158 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 15:58:14 crc kubenswrapper[4730]: I0320 15:58:14.896986 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wwqfb" Mar 20 15:58:14 crc kubenswrapper[4730]: I0320 15:58:14.914002 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 15:58:14 crc kubenswrapper[4730]: I0320 15:58:14.994755 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6bgz\" (UniqueName: \"kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz\") pod \"kube-state-metrics-0\" (UID: \"4938ac0e-1226-4f20-8f23-763b62b863c4\") " pod="openstack/kube-state-metrics-0" Mar 20 15:58:15 crc kubenswrapper[4730]: I0320 15:58:15.095849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6bgz\" (UniqueName: \"kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz\") pod \"kube-state-metrics-0\" (UID: 
\"4938ac0e-1226-4f20-8f23-763b62b863c4\") " pod="openstack/kube-state-metrics-0" Mar 20 15:58:15 crc kubenswrapper[4730]: I0320 15:58:15.124428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6bgz\" (UniqueName: \"kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz\") pod \"kube-state-metrics-0\" (UID: \"4938ac0e-1226-4f20-8f23-763b62b863c4\") " pod="openstack/kube-state-metrics-0" Mar 20 15:58:15 crc kubenswrapper[4730]: I0320 15:58:15.212575 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.361040 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.363565 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.365610 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2q5k6" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.365876 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.366008 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.366208 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.365890 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.365931 4730 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.372579 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.373926 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.387044 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.514695 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.514739 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.514924 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.514999 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515160 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515334 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxrks\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " 
pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515612 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.515696 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.617293 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.617332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.617364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxrks\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks\") pod \"prometheus-metric-storage-0\" (UID: 
\"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.619013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.622743 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.623723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.625911 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630493 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630560 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630598 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.630692 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.631362 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.632231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.635212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.636072 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.637664 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pxrks\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.639617 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.640404 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.640431 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c50c5c57c27fdb24da1fcbf3a7504c7bda45f4dc15a5678e0deb708aa433733/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.686637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:16 crc kubenswrapper[4730]: I0320 15:58:16.723383 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.476716 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.478210 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.483569 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.483701 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.483751 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-klswc" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.483701 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.485043 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.501002 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563122 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-config\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563187 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t79w\" (UniqueName: 
\"kubernetes.io/projected/caa1db28-afc0-4abc-aa80-84cccb3d8412-kube-api-access-6t79w\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563220 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563240 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563352 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.563458 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.564632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.564710 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666171 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666282 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-config\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666353 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t79w\" (UniqueName: \"kubernetes.io/projected/caa1db28-afc0-4abc-aa80-84cccb3d8412-kube-api-access-6t79w\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666395 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: 
I0320 15:58:18.666415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666635 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.666702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.667181 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.667509 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.668056 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-config\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.668667 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caa1db28-afc0-4abc-aa80-84cccb3d8412-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.672103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.674831 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.679843 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa1db28-afc0-4abc-aa80-84cccb3d8412-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc 
kubenswrapper[4730]: I0320 15:58:18.684155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t79w\" (UniqueName: \"kubernetes.io/projected/caa1db28-afc0-4abc-aa80-84cccb3d8412-kube-api-access-6t79w\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.694039 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caa1db28-afc0-4abc-aa80-84cccb3d8412\") " pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.724977 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gtrnp"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.726986 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.729709 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dbx4g" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.729792 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.730099 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.747611 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cdd7f"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.749261 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.767237 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtrnp"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.784366 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cdd7f"] Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.800378 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-scripts\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870233 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-ovn-controller-tls-certs\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-log\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870316 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-log-ovn\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870346 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsk7\" (UniqueName: \"kubernetes.io/projected/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-kube-api-access-jbsk7\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-run\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-etc-ovs\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-combined-ca-bundle\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870689 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-lib\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870788 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.870868 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31651551-edb9-4793-a752-39fa60a85ee3-scripts\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.871019 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfv4x\" (UniqueName: \"kubernetes.io/projected/31651551-edb9-4793-a752-39fa60a85ee3-kube-api-access-lfv4x\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.871063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run-ovn\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.972716 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfv4x\" (UniqueName: 
\"kubernetes.io/projected/31651551-edb9-4793-a752-39fa60a85ee3-kube-api-access-lfv4x\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.972801 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run-ovn\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.972888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-scripts\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.972955 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-ovn-controller-tls-certs\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-log\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973060 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-log-ovn\") pod \"ovn-controller-gtrnp\" (UID: 
\"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsk7\" (UniqueName: \"kubernetes.io/projected/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-kube-api-access-jbsk7\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973233 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-run\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973311 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-etc-ovs\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973369 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-combined-ca-bundle\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973409 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-lib\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: 
I0320 15:58:18.973506 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.973594 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31651551-edb9-4793-a752-39fa60a85ee3-scripts\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974241 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-log\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974371 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974391 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-run\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974420 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-log-ovn\") pod 
\"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974482 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-var-lib\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974484 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-etc-ovs\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.974578 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31651551-edb9-4793-a752-39fa60a85ee3-var-run-ovn\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.977151 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-combined-ca-bundle\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.977572 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-scripts\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.978194 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/31651551-edb9-4793-a752-39fa60a85ee3-ovn-controller-tls-certs\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.979431 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31651551-edb9-4793-a752-39fa60a85ee3-scripts\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.989108 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsk7\" (UniqueName: \"kubernetes.io/projected/35efb2c2-6521-4f6f-a350-a4dc537ecaf8-kube-api-access-jbsk7\") pod \"ovn-controller-ovs-cdd7f\" (UID: \"35efb2c2-6521-4f6f-a350-a4dc537ecaf8\") " pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:18 crc kubenswrapper[4730]: I0320 15:58:18.989860 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfv4x\" (UniqueName: \"kubernetes.io/projected/31651551-edb9-4793-a752-39fa60a85ee3-kube-api-access-lfv4x\") pod \"ovn-controller-gtrnp\" (UID: \"31651551-edb9-4793-a752-39fa60a85ee3\") " pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:19 crc kubenswrapper[4730]: I0320 15:58:19.099660 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:19 crc kubenswrapper[4730]: I0320 15:58:19.113096 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.466754 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.467063 4730 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.467185 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxh4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bf56b5889-hwkdn_openstack(ae05e358-cb9f-4772-a644-8ec5131415eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.469064 4730 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" podUID="ae05e358-cb9f-4772-a644-8ec5131415eb" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.489858 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.489916 4730 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.490043 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqsxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-784b55c5d9-5wk8n_openstack(dcd9cd10-c13e-446b-9dad-8b30f04de37e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 15:58:21 crc kubenswrapper[4730]: E0320 15:58:21.491206 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" podUID="dcd9cd10-c13e-446b-9dad-8b30f04de37e" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.875784 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.877882 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.881059 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dm82v" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.881652 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.881759 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.881848 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 20 15:58:21 crc kubenswrapper[4730]: I0320 15:58:21.882808 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026059 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " 
pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026214 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026717 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026770 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjq4h\" (UniqueName: \"kubernetes.io/projected/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-kube-api-access-rjq4h\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026802 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.026833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc 
kubenswrapper[4730]: I0320 15:58:22.026891 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128301 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128423 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128445 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128494 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjq4h\" (UniqueName: 
\"kubernetes.io/projected/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-kube-api-access-rjq4h\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128532 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.128943 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.131363 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 
15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.132341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.133267 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.152963 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.153559 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.154092 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.157196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjq4h\" (UniqueName: 
\"kubernetes.io/projected/1ba8c36f-1882-4bb3-bcb5-b3518ce35553-kube-api-access-rjq4h\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.163428 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ba8c36f-1882-4bb3-bcb5-b3518ce35553\") " pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.190518 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:22 crc kubenswrapper[4730]: W0320 15:58:22.197088 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6abf778f_200f_4d48_97b6_08a638b4efa2.slice/crio-2fc583ef7f9e5c815bdce66332204108577039d4d2c809ecc6f6da695232596d WatchSource:0}: Error finding container 2fc583ef7f9e5c815bdce66332204108577039d4d2c809ecc6f6da695232596d: Status 404 returned error can't find the container with id 2fc583ef7f9e5c815bdce66332204108577039d4d2c809ecc6f6da695232596d Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.198406 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 15:58:22 crc kubenswrapper[4730]: W0320 15:58:22.202561 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8043f69c_832c_4afa_a9b9_211507664805.slice/crio-8412f9f53f9d92b76fea1c48228bf1b9d922d18161acc37ea8504ee75f4ce219 WatchSource:0}: Error finding container 8412f9f53f9d92b76fea1c48228bf1b9d922d18161acc37ea8504ee75f4ce219: Status 404 returned error can't find the container with id 8412f9f53f9d92b76fea1c48228bf1b9d922d18161acc37ea8504ee75f4ce219 Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.205127 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.210096 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.367690 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.373657 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.435052 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc\") pod \"ae05e358-cb9f-4772-a644-8ec5131415eb\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.435448 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxh4d\" (UniqueName: \"kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d\") pod \"ae05e358-cb9f-4772-a644-8ec5131415eb\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.435508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config\") pod \"ae05e358-cb9f-4772-a644-8ec5131415eb\" (UID: \"ae05e358-cb9f-4772-a644-8ec5131415eb\") " Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.435655 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae05e358-cb9f-4772-a644-8ec5131415eb" (UID: 
"ae05e358-cb9f-4772-a644-8ec5131415eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.435866 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.436179 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config" (OuterVolumeSpecName: "config") pod "ae05e358-cb9f-4772-a644-8ec5131415eb" (UID: "ae05e358-cb9f-4772-a644-8ec5131415eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.441630 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d" (OuterVolumeSpecName: "kube-api-access-hxh4d") pod "ae05e358-cb9f-4772-a644-8ec5131415eb" (UID: "ae05e358-cb9f-4772-a644-8ec5131415eb"). InnerVolumeSpecName "kube-api-access-hxh4d". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.537367 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqsxv\" (UniqueName: \"kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv\") pod \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") "
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.537498 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config\") pod \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\" (UID: \"dcd9cd10-c13e-446b-9dad-8b30f04de37e\") "
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.537920 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxh4d\" (UniqueName: \"kubernetes.io/projected/ae05e358-cb9f-4772-a644-8ec5131415eb-kube-api-access-hxh4d\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.537933 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae05e358-cb9f-4772-a644-8ec5131415eb-config\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.541395 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config" (OuterVolumeSpecName: "config") pod "dcd9cd10-c13e-446b-9dad-8b30f04de37e" (UID: "dcd9cd10-c13e-446b-9dad-8b30f04de37e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.546545 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv" (OuterVolumeSpecName: "kube-api-access-rqsxv") pod "dcd9cd10-c13e-446b-9dad-8b30f04de37e" (UID: "dcd9cd10-c13e-446b-9dad-8b30f04de37e"). InnerVolumeSpecName "kube-api-access-rqsxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.563194 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"]
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.651483 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqsxv\" (UniqueName: \"kubernetes.io/projected/dcd9cd10-c13e-446b-9dad-8b30f04de37e-kube-api-access-rqsxv\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.651516 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd9cd10-c13e-446b-9dad-8b30f04de37e-config\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.726505 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"]
Mar 20 15:58:22 crc kubenswrapper[4730]: W0320 15:58:22.744850 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf9ca02d_e20f_4f55_ba14_92b91812afb6.slice/crio-e318018f5b6c6fc0655140eb9d79b013847e5addc578d01fc6f2fbfd40ddada2 WatchSource:0}: Error finding container e318018f5b6c6fc0655140eb9d79b013847e5addc578d01fc6f2fbfd40ddada2: Status 404 returned error can't find the container with id e318018f5b6c6fc0655140eb9d79b013847e5addc578d01fc6f2fbfd40ddada2
Mar 20 15:58:22 crc kubenswrapper[4730]: W0320 15:58:22.749728 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27dbbb52_2bd1_4e24_b621_128e7c880a2b.slice/crio-a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19 WatchSource:0}: Error finding container a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19: Status 404 returned error can't find the container with id a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.751341 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-fnchx"]
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.766884 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.779821 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.868158 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"df9ca02d-e20f-4f55-ba14-92b91812afb6","Type":"ContainerStarted","Data":"e318018f5b6c6fc0655140eb9d79b013847e5addc578d01fc6f2fbfd40ddada2"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.869543 4730 generic.go:334] "Generic (PLEG): container finished" podID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerID="9eec9013a24b2740a1eda4c33bad2a0fe15b131b43355b27b55b42be661249e4" exitCode=0
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.869584 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" event={"ID":"a5e88dae-c3fd-456c-92c6-3bc143b5a399","Type":"ContainerDied","Data":"9eec9013a24b2740a1eda4c33bad2a0fe15b131b43355b27b55b42be661249e4"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.869599 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" event={"ID":"a5e88dae-c3fd-456c-92c6-3bc143b5a399","Type":"ContainerStarted","Data":"179fd0dd8f56e799ace1a1c345f43d1901fddb9fcf8b79a809a5d214fa5edf4a"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.872978 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n" event={"ID":"dcd9cd10-c13e-446b-9dad-8b30f04de37e","Type":"ContainerDied","Data":"cc74e55810f8f5b64f9744e711295d16d00bc26b41f45c20f60b7913df187ae8"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.873022 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784b55c5d9-5wk8n"
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.874008 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerStarted","Data":"ddda948591892d2a138c1c22bc7cf5e93ad382615cc9ff618b810cf784bacaf9"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.875013 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"84bbdebb-43de-41d6-82d4-71b0948c25f8","Type":"ContainerStarted","Data":"54a8e4515bfc47a9d24eedccdb8072950e20a0d4b713e2481491bcc7d18538fc"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.875854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerStarted","Data":"8412f9f53f9d92b76fea1c48228bf1b9d922d18161acc37ea8504ee75f4ce219"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.876934 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abf778f-200f-4d48-97b6-08a638b4efa2","Type":"ContainerStarted","Data":"2fc583ef7f9e5c815bdce66332204108577039d4d2c809ecc6f6da695232596d"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.880021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.881981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn" event={"ID":"ae05e358-cb9f-4772-a644-8ec5131415eb","Type":"ContainerDied","Data":"0f15db04a290faf8bce634d61170874353004570e3569dc452521a6a7ac01ec4"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.882083 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf56b5889-hwkdn"
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.895158 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847d45595-fnchx" event={"ID":"27dbbb52-2bd1-4e24-b621-128e7c880a2b","Type":"ContainerStarted","Data":"a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.896821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" event={"ID":"7a5c1062-c366-4407-a395-cc3ad80ed296","Type":"ContainerStarted","Data":"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.896858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" event={"ID":"7a5c1062-c366-4407-a395-cc3ad80ed296","Type":"ContainerStarted","Data":"8379bbe1dd70d1272baf32264af24e932d0f78e8da002ecb096ba75a065d1af3"}
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.976178 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"]
Mar 20 15:58:22 crc kubenswrapper[4730]: I0320 15:58:22.984956 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784b55c5d9-5wk8n"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.003109 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.029124 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf56b5889-hwkdn"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.031067 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.037040 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.046989 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtrnp"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.058670 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.150327 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cdd7f"]
Mar 20 15:58:23 crc kubenswrapper[4730]: W0320 15:58:23.188868 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35efb2c2_6521_4f6f_a350_a4dc537ecaf8.slice/crio-fd022285e838760cd95fe62d2e673a3f95b956772711184e1cf9b46740dcff5b WatchSource:0}: Error finding container fd022285e838760cd95fe62d2e673a3f95b956772711184e1cf9b46740dcff5b: Status 404 returned error can't find the container with id fd022285e838760cd95fe62d2e673a3f95b956772711184e1cf9b46740dcff5b
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.245635 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.300197 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.464187 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrdjk\" (UniqueName: \"kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk\") pod \"7a5c1062-c366-4407-a395-cc3ad80ed296\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") "
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.464311 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc\") pod \"7a5c1062-c366-4407-a395-cc3ad80ed296\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") "
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.464366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config\") pod \"7a5c1062-c366-4407-a395-cc3ad80ed296\" (UID: \"7a5c1062-c366-4407-a395-cc3ad80ed296\") "
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.470810 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk" (OuterVolumeSpecName: "kube-api-access-mrdjk") pod "7a5c1062-c366-4407-a395-cc3ad80ed296" (UID: "7a5c1062-c366-4407-a395-cc3ad80ed296"). InnerVolumeSpecName "kube-api-access-mrdjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.486080 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a5c1062-c366-4407-a395-cc3ad80ed296" (UID: "7a5c1062-c366-4407-a395-cc3ad80ed296"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.495319 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config" (OuterVolumeSpecName: "config") pod "7a5c1062-c366-4407-a395-cc3ad80ed296" (UID: "7a5c1062-c366-4407-a395-cc3ad80ed296"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.548343 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae05e358-cb9f-4772-a644-8ec5131415eb" path="/var/lib/kubelet/pods/ae05e358-cb9f-4772-a644-8ec5131415eb/volumes"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.548799 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd9cd10-c13e-446b-9dad-8b30f04de37e" path="/var/lib/kubelet/pods/dcd9cd10-c13e-446b-9dad-8b30f04de37e/volumes"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.567572 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrdjk\" (UniqueName: \"kubernetes.io/projected/7a5c1062-c366-4407-a395-cc3ad80ed296-kube-api-access-mrdjk\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.567600 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.567609 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c1062-c366-4407-a395-cc3ad80ed296-config\") on node \"crc\" DevicePath \"\""
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.908964 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtrnp" event={"ID":"31651551-edb9-4793-a752-39fa60a85ee3","Type":"ContainerStarted","Data":"1d157ee12a3f603de7bc92a403912ba9effcb82755fea58f5d2d2033bc139fb0"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.910435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"899bd9ae-9354-4e70-ad37-b438a5a33a24","Type":"ContainerStarted","Data":"88944d155607449a1db05424d7fa831eefbbe52c1129e2a15b4531a9f20134db"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.912962 4730 generic.go:334] "Generic (PLEG): container finished" podID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerID="809194a2c64396f3bfaa32147cdb80cee474a4897b6ae0caac1fba9c383996f4" exitCode=0
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.913173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847d45595-fnchx" event={"ID":"27dbbb52-2bd1-4e24-b621-128e7c880a2b","Type":"ContainerDied","Data":"809194a2c64396f3bfaa32147cdb80cee474a4897b6ae0caac1fba9c383996f4"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.916102 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4938ac0e-1226-4f20-8f23-763b62b863c4","Type":"ContainerStarted","Data":"7d7a0da71b781876c622a4a6bd339b425ce64e7ebb40d9b345bbc6e39de452eb"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.919616 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" event={"ID":"a5e88dae-c3fd-456c-92c6-3bc143b5a399","Type":"ContainerStarted","Data":"15b7107144e36c5c008efc65f73b4c3e5a1b51a4b8a473402aba592b6c7329e1"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.920001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.923188 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ba8c36f-1882-4bb3-bcb5-b3518ce35553","Type":"ContainerStarted","Data":"806def77f051b27a1eb53a83708824c91f1e36524e01a36d9fa7f6e9f8168f4e"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.924905 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a5c1062-c366-4407-a395-cc3ad80ed296" containerID="180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7" exitCode=0
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.925051 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" event={"ID":"7a5c1062-c366-4407-a395-cc3ad80ed296","Type":"ContainerDied","Data":"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.925086 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth" event={"ID":"7a5c1062-c366-4407-a395-cc3ad80ed296","Type":"ContainerDied","Data":"8379bbe1dd70d1272baf32264af24e932d0f78e8da002ecb096ba75a065d1af3"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.925151 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d64f5f8f-ncpth"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.925165 4730 scope.go:117] "RemoveContainer" containerID="180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.926981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cdd7f" event={"ID":"35efb2c2-6521-4f6f-a350-a4dc537ecaf8","Type":"ContainerStarted","Data":"fd022285e838760cd95fe62d2e673a3f95b956772711184e1cf9b46740dcff5b"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.937086 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerStarted","Data":"1e180be81f67edf4314b8f53e9215c19609d113f31f1dc11bb8e1b622d9ae961"}
Mar 20 15:58:23 crc kubenswrapper[4730]: I0320 15:58:23.963718 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" podStartSLOduration=15.844895156 podStartE2EDuration="15.963696031s" podCreationTimestamp="2026-03-20 15:58:08 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.194025212 +0000 UTC m=+1161.407396581" lastFinishedPulling="2026-03-20 15:58:22.312826087 +0000 UTC m=+1161.526197456" observedRunningTime="2026-03-20 15:58:23.957601628 +0000 UTC m=+1163.170972997" watchObservedRunningTime="2026-03-20 15:58:23.963696031 +0000 UTC m=+1163.177067400"
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.032620 4730 scope.go:117] "RemoveContainer" containerID="180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"
Mar 20 15:58:24 crc kubenswrapper[4730]: E0320 15:58:24.037060 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7\": container with ID starting with 180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7 not found: ID does not exist" containerID="180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.037100 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7"} err="failed to get container status \"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7\": rpc error: code = NotFound desc = could not find container \"180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7\": container with ID starting with 180b2223e3765d491bb6fc4c8ff950162b5ee560c2cb4a3e29faf9b39fae29c7 not found: ID does not exist"
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.042383 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"]
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.049563 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d64f5f8f-ncpth"]
Mar 20 15:58:24 crc kubenswrapper[4730]: E0320 15:58:24.140374 4730 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Mar 20 15:58:24 crc kubenswrapper[4730]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/27dbbb52-2bd1-4e24-b621-128e7c880a2b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Mar 20 15:58:24 crc kubenswrapper[4730]: > podSandboxID="a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19"
Mar 20 15:58:24 crc kubenswrapper[4730]: E0320 15:58:24.140645 4730 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 20 15:58:24 crc kubenswrapper[4730]: container &Container{Name:dnsmasq-dns,Image:38.102.83.147:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n658h5c5h88h68dhb6h57dhd4h697hb8h8fh74hb7h54fh54dh548h7h55dhb8h9fh55dh688h5bbh5d5h675h669hb7h67hbbhffh668h5c7hc5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7847d45595-fnchx_openstack(27dbbb52-2bd1-4e24-b621-128e7c880a2b): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/27dbbb52-2bd1-4e24-b621-128e7c880a2b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Mar 20 15:58:24 crc kubenswrapper[4730]: > logger="UnhandledError"
Mar 20 15:58:24 crc kubenswrapper[4730]: E0320 15:58:24.141903 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/27dbbb52-2bd1-4e24-b621-128e7c880a2b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7847d45595-fnchx" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b"
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.189606 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 20 15:58:24 crc kubenswrapper[4730]: W0320 15:58:24.194279 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaa1db28_afc0_4abc_aa80_84cccb3d8412.slice/crio-6feb9c0bdd6c4d26ab57540b5fa6559ab43da600a84557c93e7717682e410ad7 WatchSource:0}: Error finding container 6feb9c0bdd6c4d26ab57540b5fa6559ab43da600a84557c93e7717682e410ad7: Status 404 returned error can't find the container with id 6feb9c0bdd6c4d26ab57540b5fa6559ab43da600a84557c93e7717682e410ad7
Mar 20 15:58:24 crc kubenswrapper[4730]: I0320 15:58:24.945270 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caa1db28-afc0-4abc-aa80-84cccb3d8412","Type":"ContainerStarted","Data":"6feb9c0bdd6c4d26ab57540b5fa6559ab43da600a84557c93e7717682e410ad7"}
Mar 20 15:58:25 crc kubenswrapper[4730]: I0320 15:58:25.545046 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5c1062-c366-4407-a395-cc3ad80ed296" path="/var/lib/kubelet/pods/7a5c1062-c366-4407-a395-cc3ad80ed296/volumes"
Mar 20 15:58:25 crc kubenswrapper[4730]: I0320 15:58:25.953058 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847d45595-fnchx" event={"ID":"27dbbb52-2bd1-4e24-b621-128e7c880a2b","Type":"ContainerStarted","Data":"10de99a85cf21dfd026eabd639047a86231ef84f5643c16ab2ab663cf70834f5"}
Mar 20 15:58:25 crc kubenswrapper[4730]: I0320 15:58:25.953585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7847d45595-fnchx"
Mar 20 15:58:25 crc kubenswrapper[4730]: I0320 15:58:25.972448 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7847d45595-fnchx" podStartSLOduration=18.972428983 podStartE2EDuration="18.972428983s" podCreationTimestamp="2026-03-20 15:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:25.968412789 +0000 UTC m=+1165.181784168" watchObservedRunningTime="2026-03-20 15:58:25.972428983 +0000 UTC m=+1165.185800352"
Mar 20 15:58:28 crc kubenswrapper[4730]: I0320 15:58:28.594291 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb"
Mar 20 15:58:28 crc kubenswrapper[4730]: I0320 15:58:28.655640 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-fnchx"]
Mar 20 15:58:28 crc kubenswrapper[4730]: I0320 15:58:28.656185 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7847d45595-fnchx" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="dnsmasq-dns" containerID="cri-o://10de99a85cf21dfd026eabd639047a86231ef84f5643c16ab2ab663cf70834f5" gracePeriod=10
Mar 20 15:58:28 crc kubenswrapper[4730]: I0320 15:58:28.973569 4730 generic.go:334] "Generic (PLEG): container finished" podID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerID="10de99a85cf21dfd026eabd639047a86231ef84f5643c16ab2ab663cf70834f5" exitCode=0
Mar 20 15:58:28 crc kubenswrapper[4730]: I0320 15:58:28.973639 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847d45595-fnchx" event={"ID":"27dbbb52-2bd1-4e24-b621-128e7c880a2b","Type":"ContainerDied","Data":"10de99a85cf21dfd026eabd639047a86231ef84f5643c16ab2ab663cf70834f5"}
Mar 20 15:58:33 crc kubenswrapper[4730]: I0320 15:58:33.338718 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7847d45595-fnchx" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused"
Mar 20 15:58:34 crc kubenswrapper[4730]: E0320 15:58:34.986185 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest"
Mar 20 15:58:34 crc kubenswrapper[4730]: E0320 15:58:34.986245 4730 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest"
Mar 20 15:58:34 crc kubenswrapper[4730]: E0320 15:58:34.986439 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n687h645h66ch5bch5f7hf6h54fh689h67bhcchd5h688h5d9h5c4h5fh6bhd9h545hb8hd4h545hcfh5f4h649hd5h644h57dh78h5fch695h684h684q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vggm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(84bbdebb-43de-41d6-82d4-71b0948c25f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 15:58:34 crc kubenswrapper[4730]: E0320 15:58:34.987682 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="84bbdebb-43de-41d6-82d4-71b0948c25f8"
Mar 20 15:58:35 crc kubenswrapper[4730]: E0320 15:58:35.030685 4730 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest"
Mar 20 15:58:35 crc kubenswrapper[4730]: E0320 15:58:35.030798 4730 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest"
Mar 20 15:58:35 crc kubenswrapper[4730]: E0320 15:58:35.031025 4730 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n546h568h5dch64fh59dh64h594h59fh85h578hb7h599h66fhd5h5d4h5fh57fhbh5bch68bh5b8h5ffhcch9fh548h594h578h65h594h558hf4h5c8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-cdd7f_openstack(35efb2c2-6521-4f6f-a350-a4dc537ecaf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 15:58:35 crc kubenswrapper[4730]: E0320 15:58:35.032595 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-cdd7f" podUID="35efb2c2-6521-4f6f-a350-a4dc537ecaf8"
Mar 20 15:58:35 crc kubenswrapper[4730]: E0320 15:58:35.065289 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="84bbdebb-43de-41d6-82d4-71b0948c25f8"
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.033992 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-fnchx"
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.077303 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847d45595-fnchx"
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.077438 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847d45595-fnchx" event={"ID":"27dbbb52-2bd1-4e24-b621-128e7c880a2b","Type":"ContainerDied","Data":"a15e1598ce7b71e13d66b76d6a152b790714647ed13bd032c186f428dbc12d19"}
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.077477 4730 scope.go:117] "RemoveContainer" containerID="10de99a85cf21dfd026eabd639047a86231ef84f5643c16ab2ab663cf70834f5"
Mar 20 15:58:36 crc kubenswrapper[4730]: E0320 15:58:36.078692 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.147:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-cdd7f" podUID="35efb2c2-6521-4f6f-a350-a4dc537ecaf8"
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.178772 4730 scope.go:117] "RemoveContainer" containerID="809194a2c64396f3bfaa32147cdb80cee474a4897b6ae0caac1fba9c383996f4"
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.215391 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config\") pod \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") "
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.215554 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc\") pod \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") "
Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.215612 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgkz6\"
(UniqueName: \"kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6\") pod \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\" (UID: \"27dbbb52-2bd1-4e24-b621-128e7c880a2b\") " Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.219331 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6" (OuterVolumeSpecName: "kube-api-access-dgkz6") pod "27dbbb52-2bd1-4e24-b621-128e7c880a2b" (UID: "27dbbb52-2bd1-4e24-b621-128e7c880a2b"). InnerVolumeSpecName "kube-api-access-dgkz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.253502 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config" (OuterVolumeSpecName: "config") pod "27dbbb52-2bd1-4e24-b621-128e7c880a2b" (UID: "27dbbb52-2bd1-4e24-b621-128e7c880a2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.254075 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27dbbb52-2bd1-4e24-b621-128e7c880a2b" (UID: "27dbbb52-2bd1-4e24-b621-128e7c880a2b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.318430 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgkz6\" (UniqueName: \"kubernetes.io/projected/27dbbb52-2bd1-4e24-b621-128e7c880a2b-kube-api-access-dgkz6\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.318459 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.318468 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27dbbb52-2bd1-4e24-b621-128e7c880a2b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.413857 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-fnchx"] Mar 20 15:58:36 crc kubenswrapper[4730]: I0320 15:58:36.421622 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7847d45595-fnchx"] Mar 20 15:58:37 crc kubenswrapper[4730]: I0320 15:58:37.561727 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" path="/var/lib/kubelet/pods/27dbbb52-2bd1-4e24-b621-128e7c880a2b/volumes" Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.097238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abf778f-200f-4d48-97b6-08a638b4efa2","Type":"ContainerStarted","Data":"dc0fdc1377be29d3b722fe6d6bb77fb846f03d0f1df47f3a8d806d50aa0f28ac"} Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.101536 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtrnp" 
event={"ID":"31651551-edb9-4793-a752-39fa60a85ee3","Type":"ContainerStarted","Data":"673a614f86cc9d53b20490455a209d43255459235454710313658c86d4b4fa1d"} Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.101896 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gtrnp" Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.104413 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"899bd9ae-9354-4e70-ad37-b438a5a33a24","Type":"ContainerStarted","Data":"e083a94b7b0bad04baccc30cbbd4595d9bfb295fd7de2e8fc839c48d6d9ed2c7"} Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.181161 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gtrnp" podStartSLOduration=7.423269801 podStartE2EDuration="20.18058131s" podCreationTimestamp="2026-03-20 15:58:18 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.161587562 +0000 UTC m=+1162.374958931" lastFinishedPulling="2026-03-20 15:58:35.918899071 +0000 UTC m=+1175.132270440" observedRunningTime="2026-03-20 15:58:38.168072894 +0000 UTC m=+1177.381444273" watchObservedRunningTime="2026-03-20 15:58:38.18058131 +0000 UTC m=+1177.393952679" Mar 20 15:58:38 crc kubenswrapper[4730]: I0320 15:58:38.205399 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.115017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ba8c36f-1882-4bb3-bcb5-b3518ce35553","Type":"ContainerStarted","Data":"d603b5b2ea36b221ffaacb3a93618a8977298365b92770e11ecdfad53d65ccc9"} Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.116861 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"4938ac0e-1226-4f20-8f23-763b62b863c4","Type":"ContainerStarted","Data":"49b97730a0d8925f4b80deb7db0aeea30943ec9b5135c0345bbaf48837573e5f"} Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.117889 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.120630 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerStarted","Data":"13985a1e2e3d58d396be0af6437cdcdb0bbdea54308502442707c077b36e9713"} Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.134095 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"df9ca02d-e20f-4f55-ba14-92b91812afb6","Type":"ContainerStarted","Data":"3d7e7a0cabaf1b38c1891734e06c9106a1e8c0af0454fdefaf773422d1dcf747"} Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.137588 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caa1db28-afc0-4abc-aa80-84cccb3d8412","Type":"ContainerStarted","Data":"eacc6920f5467c8cf478da7a75cba21c23e90e10fceb1cccd444a405fb378d06"} Mar 20 15:58:39 crc kubenswrapper[4730]: I0320 15:58:39.146805 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.32157212 podStartE2EDuration="25.146747851s" podCreationTimestamp="2026-03-20 15:58:14 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.035177151 +0000 UTC m=+1162.248548520" lastFinishedPulling="2026-03-20 15:58:37.860352882 +0000 UTC m=+1177.073724251" observedRunningTime="2026-03-20 15:58:39.139218617 +0000 UTC m=+1178.352589986" watchObservedRunningTime="2026-03-20 15:58:39.146747851 +0000 UTC m=+1178.360119220" Mar 20 15:58:40 crc kubenswrapper[4730]: I0320 15:58:40.146739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerStarted","Data":"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5"} Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.161618 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerStarted","Data":"e4820a88fffd97776afd3e8f20ce1473d0c4e99acb7cc9e56c9e53eaef07563a"} Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.865386 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ktnvd"] Mar 20 15:58:41 crc kubenswrapper[4730]: E0320 15:58:41.865899 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5c1062-c366-4407-a395-cc3ad80ed296" containerName="init" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.865946 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5c1062-c366-4407-a395-cc3ad80ed296" containerName="init" Mar 20 15:58:41 crc kubenswrapper[4730]: E0320 15:58:41.866018 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="dnsmasq-dns" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.866026 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="dnsmasq-dns" Mar 20 15:58:41 crc kubenswrapper[4730]: E0320 15:58:41.866057 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="init" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.866066 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" containerName="init" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.868859 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dbbb52-2bd1-4e24-b621-128e7c880a2b" 
containerName="dnsmasq-dns" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.868918 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5c1062-c366-4407-a395-cc3ad80ed296" containerName="init" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.872782 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.875526 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.890087 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ktnvd"] Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.949913 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.949994 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovn-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.950016 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-combined-ca-bundle\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc 
kubenswrapper[4730]: I0320 15:58:41.950123 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9bc\" (UniqueName: \"kubernetes.io/projected/c615e3c6-d705-46e8-a1e7-c1c86df055f5-kube-api-access-ts9bc\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.950365 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c615e3c6-d705-46e8-a1e7-c1c86df055f5-config\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:41 crc kubenswrapper[4730]: I0320 15:58:41.950540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovs-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.014293 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-88c65768c-zz9lq"] Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.015703 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.017472 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.029476 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-88c65768c-zz9lq"] Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.052492 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovs-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.052761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovs-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.052838 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.053600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovn-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.053628 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-combined-ca-bundle\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.053648 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9bc\" (UniqueName: \"kubernetes.io/projected/c615e3c6-d705-46e8-a1e7-c1c86df055f5-kube-api-access-ts9bc\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.053696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c615e3c6-d705-46e8-a1e7-c1c86df055f5-config\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.053774 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c615e3c6-d705-46e8-a1e7-c1c86df055f5-ovn-rundir\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.054238 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c615e3c6-d705-46e8-a1e7-c1c86df055f5-config\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.068687 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.069739 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c615e3c6-d705-46e8-a1e7-c1c86df055f5-combined-ca-bundle\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.085630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9bc\" (UniqueName: \"kubernetes.io/projected/c615e3c6-d705-46e8-a1e7-c1c86df055f5-kube-api-access-ts9bc\") pod \"ovn-controller-metrics-ktnvd\" (UID: \"c615e3c6-d705-46e8-a1e7-c1c86df055f5\") " pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.155517 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.155707 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.155771 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-87jfq\" (UniqueName: \"kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.156011 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.201508 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ktnvd" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.208850 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88c65768c-zz9lq"] Mar 20 15:58:42 crc kubenswrapper[4730]: E0320 15:58:42.209363 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-87jfq ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-88c65768c-zz9lq" podUID="ae8a0d34-0de8-464b-b533-a3eb56c320f6" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.258114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.258166 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87jfq\" (UniqueName: \"kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq\") pod 
\"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.258234 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.258375 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.259875 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.259945 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.261861 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 
15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.268744 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.270103 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.271772 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.284113 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.303720 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87jfq\" (UniqueName: \"kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq\") pod \"dnsmasq-dns-88c65768c-zz9lq\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.359681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.359975 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.360036 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdlr\" (UniqueName: \"kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.360080 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.360101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.463536 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.463645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdlr\" (UniqueName: \"kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.463720 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.463746 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.463813 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.464831 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.465464 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.466594 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.467375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.482613 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdlr\" (UniqueName: \"kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr\") pod \"dnsmasq-dns-648686d659-c5gtt\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.695734 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:42 crc kubenswrapper[4730]: I0320 15:58:42.713436 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ktnvd"] Mar 20 15:58:42 crc kubenswrapper[4730]: W0320 15:58:42.749211 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc615e3c6_d705_46e8_a1e7_c1c86df055f5.slice/crio-0e876e10d4381f7f141a114030c7addb898e1a59d3f1686111f8b89ec552304b WatchSource:0}: Error finding container 0e876e10d4381f7f141a114030c7addb898e1a59d3f1686111f8b89ec552304b: Status 404 returned error can't find the container with id 0e876e10d4381f7f141a114030c7addb898e1a59d3f1686111f8b89ec552304b Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.155355 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.173646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ktnvd" event={"ID":"c615e3c6-d705-46e8-a1e7-c1c86df055f5","Type":"ContainerStarted","Data":"547d6dc86725d6e5ac2298d66636d1679015d01ffce4088ea8ef37a09f0ab2df"} Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.173691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ktnvd" event={"ID":"c615e3c6-d705-46e8-a1e7-c1c86df055f5","Type":"ContainerStarted","Data":"0e876e10d4381f7f141a114030c7addb898e1a59d3f1686111f8b89ec552304b"} Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.177411 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caa1db28-afc0-4abc-aa80-84cccb3d8412","Type":"ContainerStarted","Data":"6320c4e2aa45fff9f52c077f4d0c3c449c955705c38ad50993cfbe211a48dbde"} Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.181603 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="899bd9ae-9354-4e70-ad37-b438a5a33a24" containerID="e083a94b7b0bad04baccc30cbbd4595d9bfb295fd7de2e8fc839c48d6d9ed2c7" exitCode=0 Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.181832 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"899bd9ae-9354-4e70-ad37-b438a5a33a24","Type":"ContainerDied","Data":"e083a94b7b0bad04baccc30cbbd4595d9bfb295fd7de2e8fc839c48d6d9ed2c7"} Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.188340 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.189267 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ba8c36f-1882-4bb3-bcb5-b3518ce35553","Type":"ContainerStarted","Data":"a49057e55a1263fc2d8b9406b94c75356e303544000ffcd8d57b91b5449f641f"} Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.204650 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ktnvd" podStartSLOduration=2.204631393 podStartE2EDuration="2.204631393s" podCreationTimestamp="2026-03-20 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:43.192117588 +0000 UTC m=+1182.405488957" watchObservedRunningTime="2026-03-20 15:58:43.204631393 +0000 UTC m=+1182.418002762" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.207145 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.239271 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.113116779 podStartE2EDuration="26.239242167s" podCreationTimestamp="2026-03-20 15:58:17 +0000 UTC" firstStartedPulling="2026-03-20 
15:58:24.197441332 +0000 UTC m=+1163.410812701" lastFinishedPulling="2026-03-20 15:58:42.32356672 +0000 UTC m=+1181.536938089" observedRunningTime="2026-03-20 15:58:43.229121599 +0000 UTC m=+1182.442492968" watchObservedRunningTime="2026-03-20 15:58:43.239242167 +0000 UTC m=+1182.452613526" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.262450 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.300390 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.315698 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.187637184 podStartE2EDuration="23.315682628s" podCreationTimestamp="2026-03-20 15:58:20 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.264532657 +0000 UTC m=+1162.477904026" lastFinishedPulling="2026-03-20 15:58:42.392578101 +0000 UTC m=+1181.605949470" observedRunningTime="2026-03-20 15:58:43.298419508 +0000 UTC m=+1182.511790877" watchObservedRunningTime="2026-03-20 15:58:43.315682628 +0000 UTC m=+1182.529053997" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.482302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc\") pod \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.482400 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb\") pod \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " Mar 20 15:58:43 crc kubenswrapper[4730]: 
I0320 15:58:43.482445 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87jfq\" (UniqueName: \"kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq\") pod \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.482517 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config\") pod \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\" (UID: \"ae8a0d34-0de8-464b-b533-a3eb56c320f6\") " Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.482860 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae8a0d34-0de8-464b-b533-a3eb56c320f6" (UID: "ae8a0d34-0de8-464b-b533-a3eb56c320f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.483015 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae8a0d34-0de8-464b-b533-a3eb56c320f6" (UID: "ae8a0d34-0de8-464b-b533-a3eb56c320f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.483269 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config" (OuterVolumeSpecName: "config") pod "ae8a0d34-0de8-464b-b533-a3eb56c320f6" (UID: "ae8a0d34-0de8-464b-b533-a3eb56c320f6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.487863 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq" (OuterVolumeSpecName: "kube-api-access-87jfq") pod "ae8a0d34-0de8-464b-b533-a3eb56c320f6" (UID: "ae8a0d34-0de8-464b-b533-a3eb56c320f6"). InnerVolumeSpecName "kube-api-access-87jfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.584094 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.584127 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.584139 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87jfq\" (UniqueName: \"kubernetes.io/projected/ae8a0d34-0de8-464b-b533-a3eb56c320f6-kube-api-access-87jfq\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.584148 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae8a0d34-0de8-464b-b533-a3eb56c320f6-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:43 crc kubenswrapper[4730]: I0320 15:58:43.801355 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.198208 4730 generic.go:334] "Generic (PLEG): container finished" podID="68e4e5c3-825d-477e-a403-0cae45a86806" containerID="c68e3d34495e1743eae1f1377e46b36b943735fc7af94f6c4a34f2182614de74" exitCode=0 Mar 
20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.198345 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648686d659-c5gtt" event={"ID":"68e4e5c3-825d-477e-a403-0cae45a86806","Type":"ContainerDied","Data":"c68e3d34495e1743eae1f1377e46b36b943735fc7af94f6c4a34f2182614de74"} Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.198374 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648686d659-c5gtt" event={"ID":"68e4e5c3-825d-477e-a403-0cae45a86806","Type":"ContainerStarted","Data":"4818c9171456a6ff54f39a98823d5ad13ee602182702fbabd7963c0de43a3bf8"} Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.200119 4730 generic.go:334] "Generic (PLEG): container finished" podID="6abf778f-200f-4d48-97b6-08a638b4efa2" containerID="dc0fdc1377be29d3b722fe6d6bb77fb846f03d0f1df47f3a8d806d50aa0f28ac" exitCode=0 Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.200165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abf778f-200f-4d48-97b6-08a638b4efa2","Type":"ContainerDied","Data":"dc0fdc1377be29d3b722fe6d6bb77fb846f03d0f1df47f3a8d806d50aa0f28ac"} Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.202387 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"899bd9ae-9354-4e70-ad37-b438a5a33a24","Type":"ContainerStarted","Data":"e3a213186b82a3bbc61ecd10d2f4aed4539fa2830e21ca4c0bb702d6f7bc640b"} Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.202491 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-88c65768c-zz9lq" Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.204293 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.274335 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-88c65768c-zz9lq"] Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.288166 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-88c65768c-zz9lq"] Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.288244 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 20 15:58:44 crc kubenswrapper[4730]: I0320 15:58:44.317323 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.416300375 podStartE2EDuration="33.317303697s" podCreationTimestamp="2026-03-20 15:58:11 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.116488861 +0000 UTC m=+1162.329860230" lastFinishedPulling="2026-03-20 15:58:36.017492173 +0000 UTC m=+1175.230863552" observedRunningTime="2026-03-20 15:58:44.312754857 +0000 UTC m=+1183.526126236" watchObservedRunningTime="2026-03-20 15:58:44.317303697 +0000 UTC m=+1183.530675066" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.211273 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6abf778f-200f-4d48-97b6-08a638b4efa2","Type":"ContainerStarted","Data":"1dca84c6132b08065c2467e557f8c71c754b08f037a4e15bc1f4b58a30cfcf02"} Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.213806 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648686d659-c5gtt" event={"ID":"68e4e5c3-825d-477e-a403-0cae45a86806","Type":"ContainerStarted","Data":"fe73524182281c96c8ff70a33be500f38666a259e3cec724e30e6c153a1251ae"} Mar 20 15:58:45 crc 
kubenswrapper[4730]: I0320 15:58:45.214106 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.220933 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.251800 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.862043826 podStartE2EDuration="36.251779926s" podCreationTimestamp="2026-03-20 15:58:09 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.199041764 +0000 UTC m=+1161.412413133" lastFinishedPulling="2026-03-20 15:58:36.588777864 +0000 UTC m=+1175.802149233" observedRunningTime="2026-03-20 15:58:45.238074978 +0000 UTC m=+1184.451446347" watchObservedRunningTime="2026-03-20 15:58:45.251779926 +0000 UTC m=+1184.465151295" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.542867 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8a0d34-0de8-464b-b533-a3eb56c320f6" path="/var/lib/kubelet/pods/ae8a0d34-0de8-464b-b533-a3eb56c320f6/volumes" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.800972 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.840886 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:45 crc kubenswrapper[4730]: I0320 15:58:45.865594 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-648686d659-c5gtt" podStartSLOduration=3.865573805 podStartE2EDuration="3.865573805s" podCreationTimestamp="2026-03-20 15:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:45.303870716 
+0000 UTC m=+1184.517242085" watchObservedRunningTime="2026-03-20 15:58:45.865573805 +0000 UTC m=+1185.078945174" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.222002 4730 generic.go:334] "Generic (PLEG): container finished" podID="fdd3845a-3723-438f-aa58-606451baed6c" containerID="e4820a88fffd97776afd3e8f20ce1473d0c4e99acb7cc9e56c9e53eaef07563a" exitCode=0 Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.222079 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerDied","Data":"e4820a88fffd97776afd3e8f20ce1473d0c4e99acb7cc9e56c9e53eaef07563a"} Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.269810 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.525933 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.527384 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.529389 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.529785 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.530084 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.530448 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pqq9k" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.541785 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.632814 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.632861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.632909 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-scripts\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " 
pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.632932 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-config\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.632971 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.633038 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v6j\" (UniqueName: \"kubernetes.io/projected/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-kube-api-access-p4v6j\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.633053 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734268 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734531 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734566 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-scripts\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-config\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734603 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734637 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v6j\" (UniqueName: \"kubernetes.io/projected/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-kube-api-access-p4v6j\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.734655 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.735031 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.735607 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-config\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.735723 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-scripts\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.738988 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.739164 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.739410 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.757906 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v6j\" (UniqueName: \"kubernetes.io/projected/4cb9ef9a-6d98-43c1-8e74-7f24ba39357d-kube-api-access-p4v6j\") pod \"ovn-northd-0\" (UID: \"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d\") " pod="openstack/ovn-northd-0" Mar 20 15:58:46 crc kubenswrapper[4730]: I0320 15:58:46.855580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 15:58:47 crc kubenswrapper[4730]: I0320 15:58:47.232017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"84bbdebb-43de-41d6-82d4-71b0948c25f8","Type":"ContainerStarted","Data":"1f77c44861846d42519203e36df51ca7effaf39c8d087cdf7c5bea5217b97755"} Mar 20 15:58:47 crc kubenswrapper[4730]: I0320 15:58:47.232421 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 20 15:58:47 crc kubenswrapper[4730]: I0320 15:58:47.251047 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.433395575 podStartE2EDuration="35.251031119s" podCreationTimestamp="2026-03-20 15:58:12 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.784471978 +0000 UTC m=+1161.997843347" lastFinishedPulling="2026-03-20 15:58:46.602107522 +0000 UTC m=+1185.815478891" observedRunningTime="2026-03-20 15:58:47.249855675 +0000 UTC m=+1186.463227044" watchObservedRunningTime="2026-03-20 15:58:47.251031119 +0000 UTC m=+1186.464402478" Mar 20 15:58:47 crc kubenswrapper[4730]: I0320 15:58:47.375395 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 15:58:48 crc kubenswrapper[4730]: E0320 15:58:48.216610 
4730 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:46540->38.102.83.162:41077: write tcp 38.102.83.162:46540->38.102.83.162:41077: write: broken pipe Mar 20 15:58:48 crc kubenswrapper[4730]: I0320 15:58:48.241877 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d","Type":"ContainerStarted","Data":"3b3e82e8c77b75fcffe50c4ddfbc5b744895a2b18d1f5d0d563a3717927f7732"} Mar 20 15:58:48 crc kubenswrapper[4730]: I0320 15:58:48.241915 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d","Type":"ContainerStarted","Data":"69dd39cf6401b28114536c13981581ad290f1afdc3ad0ec83535c5b6d2123db8"} Mar 20 15:58:49 crc kubenswrapper[4730]: I0320 15:58:49.249762 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4cb9ef9a-6d98-43c1-8e74-7f24ba39357d","Type":"ContainerStarted","Data":"964e8576b3554110ae20790b5688f31bd138746068e3fe025ffb791e18a75b8e"} Mar 20 15:58:49 crc kubenswrapper[4730]: I0320 15:58:49.250848 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 20 15:58:49 crc kubenswrapper[4730]: I0320 15:58:49.269960 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.67340561 podStartE2EDuration="3.269942769s" podCreationTimestamp="2026-03-20 15:58:46 +0000 UTC" firstStartedPulling="2026-03-20 15:58:47.38025871 +0000 UTC m=+1186.593630079" lastFinishedPulling="2026-03-20 15:58:47.976795869 +0000 UTC m=+1187.190167238" observedRunningTime="2026-03-20 15:58:49.269843117 +0000 UTC m=+1188.483214486" watchObservedRunningTime="2026-03-20 15:58:49.269942769 +0000 UTC m=+1188.483314138" Mar 20 15:58:51 crc kubenswrapper[4730]: I0320 15:58:51.131073 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-galera-0" Mar 20 15:58:51 crc kubenswrapper[4730]: I0320 15:58:51.131460 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 20 15:58:51 crc kubenswrapper[4730]: I0320 15:58:51.240155 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 20 15:58:51 crc kubenswrapper[4730]: I0320 15:58:51.360203 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.569517 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.569858 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.668551 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.701564 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.707973 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.777456 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.777761 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="dnsmasq-dns" containerID="cri-o://15b7107144e36c5c008efc65f73b4c3e5a1b51a4b8a473402aba592b6c7329e1" gracePeriod=10 Mar 20 15:58:52 crc 
kubenswrapper[4730]: I0320 15:58:52.923320 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e285-account-create-update-6wk66"] Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.924544 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.926196 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.941430 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e285-account-create-update-6wk66"] Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.987692 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bjqvh"] Mar 20 15:58:52 crc kubenswrapper[4730]: I0320 15:58:52.988895 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.006648 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bjqvh"] Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.050011 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts\") pod \"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.050110 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcdj\" (UniqueName: \"kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj\") pod \"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " 
pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.151290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.151360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts\") pod \"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.151398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfcdj\" (UniqueName: \"kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj\") pod \"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.151477 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7ws\" (UniqueName: \"kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.152447 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts\") pod 
\"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.176153 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfcdj\" (UniqueName: \"kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj\") pod \"glance-e285-account-create-update-6wk66\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.244685 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.253045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.253175 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7ws\" (UniqueName: \"kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.253684 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.269995 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7ws\" (UniqueName: \"kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws\") pod \"glance-db-create-bjqvh\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.285594 4730 generic.go:334] "Generic (PLEG): container finished" podID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerID="15b7107144e36c5c008efc65f73b4c3e5a1b51a4b8a473402aba592b6c7329e1" exitCode=0 Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.286517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" event={"ID":"a5e88dae-c3fd-456c-92c6-3bc143b5a399","Type":"ContainerDied","Data":"15b7107144e36c5c008efc65f73b4c3e5a1b51a4b8a473402aba592b6c7329e1"} Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.310561 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.437932 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.595555 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.688889 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e285-account-create-update-6wk66"] Mar 20 15:58:53 crc kubenswrapper[4730]: W0320 15:58:53.700239 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16da1663_821b_4e05_95f6_df67e9fac962.slice/crio-8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a WatchSource:0}: Error finding container 8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a: Status 404 returned error can't find the container with id 8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.805870 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bjqvh"] Mar 20 15:58:53 crc kubenswrapper[4730]: W0320 15:58:53.813835 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17c0870c_17e5_4bd4_91b1_a8df134a4fbd.slice/crio-758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f WatchSource:0}: Error finding container 758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f: Status 404 returned error can't find the container with id 758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.989883 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fvknw"] Mar 20 15:58:53 crc kubenswrapper[4730]: I0320 15:58:53.991185 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.004555 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fvknw"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.016559 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.162588 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db41-account-create-update-x7l2w"] Mar 20 15:58:54 crc kubenswrapper[4730]: E0320 15:58:54.162916 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="dnsmasq-dns" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.162935 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="dnsmasq-dns" Mar 20 15:58:54 crc kubenswrapper[4730]: E0320 15:58:54.162969 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="init" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.162976 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="init" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.163117 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" containerName="dnsmasq-dns" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.163674 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.166060 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.183239 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config\") pod \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.183782 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db41-account-create-update-x7l2w"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.183926 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc\") pod \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.184276 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d94g\" (UniqueName: \"kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g\") pod \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\" (UID: \"a5e88dae-c3fd-456c-92c6-3bc143b5a399\") " Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.184596 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwtq\" (UniqueName: \"kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.184710 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.217432 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g" (OuterVolumeSpecName: "kube-api-access-7d94g") pod "a5e88dae-c3fd-456c-92c6-3bc143b5a399" (UID: "a5e88dae-c3fd-456c-92c6-3bc143b5a399"). InnerVolumeSpecName "kube-api-access-7d94g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.255915 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x4h5x"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.257621 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.264836 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config" (OuterVolumeSpecName: "config") pod "a5e88dae-c3fd-456c-92c6-3bc143b5a399" (UID: "a5e88dae-c3fd-456c-92c6-3bc143b5a399"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.274340 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x4h5x"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.285956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.286215 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22s8r\" (UniqueName: \"kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.286284 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwtq\" (UniqueName: \"kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.286353 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.286652 4730 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.286669 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d94g\" (UniqueName: \"kubernetes.io/projected/a5e88dae-c3fd-456c-92c6-3bc143b5a399-kube-api-access-7d94g\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.287219 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.288211 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5e88dae-c3fd-456c-92c6-3bc143b5a399" (UID: "a5e88dae-c3fd-456c-92c6-3bc143b5a399"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.302767 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwtq\" (UniqueName: \"kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq\") pod \"keystone-db-create-fvknw\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.308474 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bjqvh" event={"ID":"17c0870c-17e5-4bd4-91b1-a8df134a4fbd","Type":"ContainerStarted","Data":"30a5fc8a5ea71396f4de5cb5ef85143858b4f15e175e2aea0d88617f137cddad"} Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.308532 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bjqvh" event={"ID":"17c0870c-17e5-4bd4-91b1-a8df134a4fbd","Type":"ContainerStarted","Data":"758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f"} Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.310988 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.315281 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.315275 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcc47849-2r2xb" event={"ID":"a5e88dae-c3fd-456c-92c6-3bc143b5a399","Type":"ContainerDied","Data":"179fd0dd8f56e799ace1a1c345f43d1901fddb9fcf8b79a809a5d214fa5edf4a"} Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.315430 4730 scope.go:117] "RemoveContainer" containerID="15b7107144e36c5c008efc65f73b4c3e5a1b51a4b8a473402aba592b6c7329e1" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.318727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e285-account-create-update-6wk66" event={"ID":"16da1663-821b-4e05-95f6-df67e9fac962","Type":"ContainerStarted","Data":"67c549d0aa6a1c0db14f97c3aff414699de48b304aa4c5c416c420aae8bc31a7"} Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.318782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e285-account-create-update-6wk66" event={"ID":"16da1663-821b-4e05-95f6-df67e9fac962","Type":"ContainerStarted","Data":"8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a"} Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.330534 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-bjqvh" podStartSLOduration=2.33050688 podStartE2EDuration="2.33050688s" podCreationTimestamp="2026-03-20 15:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:54.32417628 +0000 UTC m=+1193.537547649" watchObservedRunningTime="2026-03-20 15:58:54.33050688 +0000 UTC m=+1193.543878249" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.358674 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e285-account-create-update-6wk66" podStartSLOduration=2.358650989 
podStartE2EDuration="2.358650989s" podCreationTimestamp="2026-03-20 15:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:54.347703558 +0000 UTC m=+1193.561074927" watchObservedRunningTime="2026-03-20 15:58:54.358650989 +0000 UTC m=+1193.572022358" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.386375 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c5a1-account-create-update-hfdmn"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.388062 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.393961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.394374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22s8r\" (UniqueName: \"kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.394464 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 
crc kubenswrapper[4730]: I0320 15:58:54.394566 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698g8\" (UniqueName: \"kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.394782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.394852 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5e88dae-c3fd-456c-92c6-3bc143b5a399-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.395882 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.397002 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.399991 4730 scope.go:117] "RemoveContainer" containerID="9eec9013a24b2740a1eda4c33bad2a0fe15b131b43355b27b55b42be661249e4" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.414563 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5a1-account-create-update-hfdmn"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.415532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22s8r\" (UniqueName: 
\"kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r\") pod \"keystone-db41-account-create-update-x7l2w\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.427514 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74bcc47849-2r2xb"] Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.496200 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts\") pod \"placement-c5a1-account-create-update-hfdmn\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.496290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.496364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698g8\" (UniqueName: \"kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.496429 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qkh\" (UniqueName: \"kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh\") pod \"placement-c5a1-account-create-update-hfdmn\" (UID: 
\"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.498044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.532922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698g8\" (UniqueName: \"kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8\") pod \"placement-db-create-x4h5x\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.597599 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9qkh\" (UniqueName: \"kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh\") pod \"placement-c5a1-account-create-update-hfdmn\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.597722 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts\") pod \"placement-c5a1-account-create-update-hfdmn\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.598616 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts\") pod 
\"placement-c5a1-account-create-update-hfdmn\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.604991 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.611024 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.617378 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qkh\" (UniqueName: \"kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh\") pod \"placement-c5a1-account-create-update-hfdmn\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.745780 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:54 crc kubenswrapper[4730]: I0320 15:58:54.891781 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fvknw"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.113070 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x4h5x"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.122060 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db41-account-create-update-x7l2w"] Mar 20 15:58:55 crc kubenswrapper[4730]: W0320 15:58:55.129367 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40f368e_f905_465b_9af0_b0ecb753de79.slice/crio-085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7 WatchSource:0}: Error finding container 085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7: Status 404 returned error can't find the container with id 085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7 Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.284184 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5a1-account-create-update-hfdmn"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.321104 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.322481 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.374419 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.392459 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-n9vdf"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.393495 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.409272 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db41-account-create-update-x7l2w" event={"ID":"c40f368e-f905-465b-9af0-b0ecb753de79","Type":"ContainerStarted","Data":"085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.411987 4730 generic.go:334] "Generic (PLEG): container finished" podID="16da1663-821b-4e05-95f6-df67e9fac962" containerID="67c549d0aa6a1c0db14f97c3aff414699de48b304aa4c5c416c420aae8bc31a7" exitCode=0 Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.412033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e285-account-create-update-6wk66" event={"ID":"16da1663-821b-4e05-95f6-df67e9fac962","Type":"ContainerDied","Data":"67c549d0aa6a1c0db14f97c3aff414699de48b304aa4c5c416c420aae8bc31a7"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.413405 4730 generic.go:334] "Generic (PLEG): container finished" podID="35efb2c2-6521-4f6f-a350-a4dc537ecaf8" containerID="775e72f867e9b2d92cfca2b734aa3db0f978ba6095c698e181792a7a0058d4cd" exitCode=0 Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.413440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cdd7f" 
event={"ID":"35efb2c2-6521-4f6f-a350-a4dc537ecaf8","Type":"ContainerDied","Data":"775e72f867e9b2d92cfca2b734aa3db0f978ba6095c698e181792a7a0058d4cd"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.417040 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x4h5x" event={"ID":"3198c781-92f7-40f1-9b6e-ed5310febe0b","Type":"ContainerStarted","Data":"848e9fa2112624c8fb8c6bec3d6893fc6a8dd666fb6f1795ec93e5b1d6b5bea5"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.422688 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.422847 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt2hc\" (UniqueName: \"kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.422902 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.422991 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" 
(UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.423034 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.426311 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerStarted","Data":"ccc4d976b4160ab2263002a763830d5f5f68919c64d310c4f41b79be9631a6ea"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.437351 4730 generic.go:334] "Generic (PLEG): container finished" podID="17c0870c-17e5-4bd4-91b1-a8df134a4fbd" containerID="30a5fc8a5ea71396f4de5cb5ef85143858b4f15e175e2aea0d88617f137cddad" exitCode=0 Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.437661 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bjqvh" event={"ID":"17c0870c-17e5-4bd4-91b1-a8df134a4fbd","Type":"ContainerDied","Data":"30a5fc8a5ea71396f4de5cb5ef85143858b4f15e175e2aea0d88617f137cddad"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.457974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fvknw" event={"ID":"ef40906b-a3dc-45b8-8bde-dd06eaaef85c","Type":"ContainerStarted","Data":"58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd"} Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.458021 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fvknw" event={"ID":"ef40906b-a3dc-45b8-8bde-dd06eaaef85c","Type":"ContainerStarted","Data":"53640fa059a665b1d15935fd34ffaf73050cf1ca9fa0e3068b812699863c68d2"} Mar 20 
15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.472876 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-n9vdf"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.525923 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt2hc\" (UniqueName: \"kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.525990 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.526038 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hhb\" (UniqueName: \"kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.526085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.526111 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.526136 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.526185 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.527069 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.528432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.528931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: 
\"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.528967 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.558639 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e88dae-c3fd-456c-92c6-3bc143b5a399" path="/var/lib/kubelet/pods/a5e88dae-c3fd-456c-92c6-3bc143b5a399/volumes" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.571399 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt2hc\" (UniqueName: \"kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc\") pod \"dnsmasq-dns-6868c44cb9-4zxm5\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.587534 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-87f2-account-create-update-lblc4"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.589002 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.592993 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.635999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.636164 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hhb\" (UniqueName: \"kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.638000 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.641352 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-87f2-account-create-update-lblc4"] Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.689997 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hhb\" (UniqueName: \"kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb\") pod \"watcher-db-create-n9vdf\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: 
I0320 15:58:55.708653 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.734532 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-fvknw" podStartSLOduration=2.7345098009999997 podStartE2EDuration="2.734509801s" podCreationTimestamp="2026-03-20 15:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:55.581433491 +0000 UTC m=+1194.794804860" watchObservedRunningTime="2026-03-20 15:58:55.734509801 +0000 UTC m=+1194.947881170" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.745662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx74\" (UniqueName: \"kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.745708 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.752616 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.849219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhx74\" (UniqueName: \"kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.849267 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.850023 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.878341 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhx74\" (UniqueName: \"kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74\") pod \"watcher-87f2-account-create-update-lblc4\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:55 crc kubenswrapper[4730]: I0320 15:58:55.980483 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:56 crc kubenswrapper[4730]: E0320 15:58:56.122672 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef40906b_a3dc_45b8_8bde_dd06eaaef85c.slice/crio-conmon-58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef40906b_a3dc_45b8_8bde_dd06eaaef85c.slice/crio-58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd.scope\": RecentStats: unable to find data in memory cache]" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.288982 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:58:56 crc kubenswrapper[4730]: W0320 15:58:56.303691 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3806a27b_4a0f_439b_8660_d9ccd4bb0618.slice/crio-1d80c1a200a2b02176c3920aea1d4ac3bcd064cd7f985eb732053050a5d3edc7 WatchSource:0}: Error finding container 1d80c1a200a2b02176c3920aea1d4ac3bcd064cd7f985eb732053050a5d3edc7: Status 404 returned error can't find the container with id 1d80c1a200a2b02176c3920aea1d4ac3bcd064cd7f985eb732053050a5d3edc7 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.391989 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-n9vdf"] Mar 20 15:58:56 crc kubenswrapper[4730]: W0320 15:58:56.401108 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b436cd_ff29_4a9f_9e58_4c8760b1e012.slice/crio-46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413 WatchSource:0}: Error finding container 
46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413: Status 404 returned error can't find the container with id 46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.470029 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" event={"ID":"3806a27b-4a0f-439b-8660-d9ccd4bb0618","Type":"ContainerStarted","Data":"1d80c1a200a2b02176c3920aea1d4ac3bcd064cd7f985eb732053050a5d3edc7"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.473542 4730 generic.go:334] "Generic (PLEG): container finished" podID="a532566c-ab86-4984-9212-1e48605d192b" containerID="a8ebba1aa3aefe2f2a84695ac23d42f8c9788cfc46f63bc2e9dead733c7274ec" exitCode=0 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.473641 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5a1-account-create-update-hfdmn" event={"ID":"a532566c-ab86-4984-9212-1e48605d192b","Type":"ContainerDied","Data":"a8ebba1aa3aefe2f2a84695ac23d42f8c9788cfc46f63bc2e9dead733c7274ec"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.473719 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5a1-account-create-update-hfdmn" event={"ID":"a532566c-ab86-4984-9212-1e48605d192b","Type":"ContainerStarted","Data":"292856fca4df62efa63cfc164ccbb4f784eb6e19218a746589e6b7b1c3a0dd78"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.476184 4730 generic.go:334] "Generic (PLEG): container finished" podID="3198c781-92f7-40f1-9b6e-ed5310febe0b" containerID="1742d4d5f625e20265689269ef8d4a8b9f9546ddd3978d31dffe002e4353d662" exitCode=0 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.476302 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x4h5x" event={"ID":"3198c781-92f7-40f1-9b6e-ed5310febe0b","Type":"ContainerDied","Data":"1742d4d5f625e20265689269ef8d4a8b9f9546ddd3978d31dffe002e4353d662"} Mar 20 15:58:56 
crc kubenswrapper[4730]: I0320 15:58:56.482346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cdd7f" event={"ID":"35efb2c2-6521-4f6f-a350-a4dc537ecaf8","Type":"ContainerStarted","Data":"738343ef117f1c9c96aec25170eaeff222767a5338bf169b41385a96e5114518"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.490713 4730 generic.go:334] "Generic (PLEG): container finished" podID="ef40906b-a3dc-45b8-8bde-dd06eaaef85c" containerID="58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd" exitCode=0 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.490749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fvknw" event={"ID":"ef40906b-a3dc-45b8-8bde-dd06eaaef85c","Type":"ContainerDied","Data":"58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.493332 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n9vdf" event={"ID":"c7b436cd-ff29-4a9f-9e58-4c8760b1e012","Type":"ContainerStarted","Data":"46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.496397 4730 generic.go:334] "Generic (PLEG): container finished" podID="c40f368e-f905-465b-9af0-b0ecb753de79" containerID="b503fc415bca6f276d3faa0fabe6ea4e17e93d2815b320d70f62ffa635dc90fc" exitCode=0 Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.496496 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db41-account-create-update-x7l2w" event={"ID":"c40f368e-f905-465b-9af0-b0ecb753de79","Type":"ContainerDied","Data":"b503fc415bca6f276d3faa0fabe6ea4e17e93d2815b320d70f62ffa635dc90fc"} Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.586261 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-87f2-account-create-update-lblc4"] Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.717732 4730 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.745853 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.751045 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.751214 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.751343 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.751462 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6gzpt" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.758925 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.782855 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-lock\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.782991 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9def6e-27a0-4543-8d3c-07b3e4005b33-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.783021 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ktk6f\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-kube-api-access-ktk6f\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.783054 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.783097 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-cache\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.783789 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888223 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-cache\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " 
pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888424 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-lock\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9def6e-27a0-4543-8d3c-07b3e4005b33-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888552 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktk6f\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-kube-api-access-ktk6f\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.888596 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.889126 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.893733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-lock\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.889955 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c9def6e-27a0-4543-8d3c-07b3e4005b33-cache\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: E0320 15:58:56.893879 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:58:56 crc kubenswrapper[4730]: E0320 15:58:56.893893 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:58:56 crc kubenswrapper[4730]: E0320 15:58:56.894000 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:58:57.393966412 +0000 UTC m=+1196.607337781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.924709 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktk6f\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-kube-api-access-ktk6f\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.925039 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9def6e-27a0-4543-8d3c-07b3e4005b33-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.949701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.976499 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.979500 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.990646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfcdj\" (UniqueName: \"kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj\") pod \"16da1663-821b-4e05-95f6-df67e9fac962\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.990828 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts\") pod \"16da1663-821b-4e05-95f6-df67e9fac962\" (UID: \"16da1663-821b-4e05-95f6-df67e9fac962\") " Mar 20 15:58:56 crc kubenswrapper[4730]: I0320 15:58:56.993598 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16da1663-821b-4e05-95f6-df67e9fac962" (UID: "16da1663-821b-4e05-95f6-df67e9fac962"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.091640 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts\") pod \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.091760 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7ws\" (UniqueName: \"kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws\") pod \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\" (UID: \"17c0870c-17e5-4bd4-91b1-a8df134a4fbd\") " Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.092043 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16da1663-821b-4e05-95f6-df67e9fac962-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.092431 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17c0870c-17e5-4bd4-91b1-a8df134a4fbd" (UID: "17c0870c-17e5-4bd4-91b1-a8df134a4fbd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.121569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj" (OuterVolumeSpecName: "kube-api-access-xfcdj") pod "16da1663-821b-4e05-95f6-df67e9fac962" (UID: "16da1663-821b-4e05-95f6-df67e9fac962"). InnerVolumeSpecName "kube-api-access-xfcdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.123450 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws" (OuterVolumeSpecName: "kube-api-access-hj7ws") pod "17c0870c-17e5-4bd4-91b1-a8df134a4fbd" (UID: "17c0870c-17e5-4bd4-91b1-a8df134a4fbd"). InnerVolumeSpecName "kube-api-access-hj7ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.196087 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.196442 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfcdj\" (UniqueName: \"kubernetes.io/projected/16da1663-821b-4e05-95f6-df67e9fac962-kube-api-access-xfcdj\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.196465 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7ws\" (UniqueName: \"kubernetes.io/projected/17c0870c-17e5-4bd4-91b1-a8df134a4fbd-kube-api-access-hj7ws\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.400412 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:57 crc kubenswrapper[4730]: E0320 15:58:57.400715 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:58:57 crc kubenswrapper[4730]: E0320 15:58:57.400733 4730 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:58:57 crc kubenswrapper[4730]: E0320 15:58:57.400781 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:58:58.400765301 +0000 UTC m=+1197.614136670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.508238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bjqvh" event={"ID":"17c0870c-17e5-4bd4-91b1-a8df134a4fbd","Type":"ContainerDied","Data":"758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.508298 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758d7b72caef7e5c055044d07ddf94f810e0aec3dcd8ba6e54797d6869ccee1f" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.508365 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bjqvh" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.511766 4730 generic.go:334] "Generic (PLEG): container finished" podID="a132fe19-9294-49c6-9b1e-fe3eed7f4bae" containerID="ae717d458b43c41d279b4f17419574a7ba6d139ccd8581e792b75559eb5cba0c" exitCode=0 Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.511829 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-87f2-account-create-update-lblc4" event={"ID":"a132fe19-9294-49c6-9b1e-fe3eed7f4bae","Type":"ContainerDied","Data":"ae717d458b43c41d279b4f17419574a7ba6d139ccd8581e792b75559eb5cba0c"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.511857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-87f2-account-create-update-lblc4" event={"ID":"a132fe19-9294-49c6-9b1e-fe3eed7f4bae","Type":"ContainerStarted","Data":"10d5a54946b4bf571d21b82b12cd1c4555f94ac55a2e76b8dacd79b2d2c7f077"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.513908 4730 generic.go:334] "Generic (PLEG): container finished" podID="c7b436cd-ff29-4a9f-9e58-4c8760b1e012" containerID="2857b9eca0093dd961d059808f0936df2d938583bfd861b80e003896a914c165" exitCode=0 Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.513950 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n9vdf" event={"ID":"c7b436cd-ff29-4a9f-9e58-4c8760b1e012","Type":"ContainerDied","Data":"2857b9eca0093dd961d059808f0936df2d938583bfd861b80e003896a914c165"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.520519 4730 generic.go:334] "Generic (PLEG): container finished" podID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerID="065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1" exitCode=0 Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.520591 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" 
event={"ID":"3806a27b-4a0f-439b-8660-d9ccd4bb0618","Type":"ContainerDied","Data":"065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.524219 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e285-account-create-update-6wk66" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.524261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e285-account-create-update-6wk66" event={"ID":"16da1663-821b-4e05-95f6-df67e9fac962","Type":"ContainerDied","Data":"8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.524322 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8583b53c13d80cd0982656c0b75867dafc594f16bd85e802824264387cf98f7a" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.530083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cdd7f" event={"ID":"35efb2c2-6521-4f6f-a350-a4dc537ecaf8","Type":"ContainerStarted","Data":"d215dd2e91a84b3112500d4694f1541097820fe0660149159734b31a6dacf3b5"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.530653 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.530857 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.564418 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerStarted","Data":"2e07902c18ce4590f197ddee6088f8273a8da0ef0fa191f28dc4116786b4c25f"} Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.641327 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-cdd7f" podStartSLOduration=8.583908846 podStartE2EDuration="39.641306535s" podCreationTimestamp="2026-03-20 15:58:18 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.196024711 +0000 UTC m=+1162.409396080" lastFinishedPulling="2026-03-20 15:58:54.2534224 +0000 UTC m=+1193.466793769" observedRunningTime="2026-03-20 15:58:57.626086573 +0000 UTC m=+1196.839457942" watchObservedRunningTime="2026-03-20 15:58:57.641306535 +0000 UTC m=+1196.854677905" Mar 20 15:58:57 crc kubenswrapper[4730]: I0320 15:58:57.987853 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.127866 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-698g8\" (UniqueName: \"kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8\") pod \"3198c781-92f7-40f1-9b6e-ed5310febe0b\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.128063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts\") pod \"3198c781-92f7-40f1-9b6e-ed5310febe0b\" (UID: \"3198c781-92f7-40f1-9b6e-ed5310febe0b\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.129188 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3198c781-92f7-40f1-9b6e-ed5310febe0b" (UID: "3198c781-92f7-40f1-9b6e-ed5310febe0b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.141414 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8" (OuterVolumeSpecName: "kube-api-access-698g8") pod "3198c781-92f7-40f1-9b6e-ed5310febe0b" (UID: "3198c781-92f7-40f1-9b6e-ed5310febe0b"). InnerVolumeSpecName "kube-api-access-698g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.204660 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mkvv4"] Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.204957 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3198c781-92f7-40f1-9b6e-ed5310febe0b" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.204970 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3198c781-92f7-40f1-9b6e-ed5310febe0b" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.204986 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c0870c-17e5-4bd4-91b1-a8df134a4fbd" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.204992 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c0870c-17e5-4bd4-91b1-a8df134a4fbd" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.205005 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16da1663-821b-4e05-95f6-df67e9fac962" containerName="mariadb-account-create-update" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.205011 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="16da1663-821b-4e05-95f6-df67e9fac962" containerName="mariadb-account-create-update" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.205181 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3198c781-92f7-40f1-9b6e-ed5310febe0b" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.205194 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c0870c-17e5-4bd4-91b1-a8df134a4fbd" containerName="mariadb-database-create" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.205209 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="16da1663-821b-4e05-95f6-df67e9fac962" containerName="mariadb-account-create-update" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.205763 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.212524 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b8g88" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.212529 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.220982 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mkvv4"] Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.230558 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3198c781-92f7-40f1-9b6e-ed5310febe0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.230599 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-698g8\" (UniqueName: \"kubernetes.io/projected/3198c781-92f7-40f1-9b6e-ed5310febe0b-kube-api-access-698g8\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.283773 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.307789 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.313709 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.332924 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts\") pod \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.333087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfwtq\" (UniqueName: \"kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq\") pod \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\" (UID: \"ef40906b-a3dc-45b8-8bde-dd06eaaef85c\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.333329 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.333370 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 
crc kubenswrapper[4730]: I0320 15:58:58.333427 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwpz2\" (UniqueName: \"kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.333465 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.334533 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef40906b-a3dc-45b8-8bde-dd06eaaef85c" (UID: "ef40906b-a3dc-45b8-8bde-dd06eaaef85c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.337070 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq" (OuterVolumeSpecName: "kube-api-access-vfwtq") pod "ef40906b-a3dc-45b8-8bde-dd06eaaef85c" (UID: "ef40906b-a3dc-45b8-8bde-dd06eaaef85c"). InnerVolumeSpecName "kube-api-access-vfwtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.434966 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9qkh\" (UniqueName: \"kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh\") pod \"a532566c-ab86-4984-9212-1e48605d192b\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.435007 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts\") pod \"c40f368e-f905-465b-9af0-b0ecb753de79\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.435030 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22s8r\" (UniqueName: \"kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r\") pod \"c40f368e-f905-465b-9af0-b0ecb753de79\" (UID: \"c40f368e-f905-465b-9af0-b0ecb753de79\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.435569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c40f368e-f905-465b-9af0-b0ecb753de79" (UID: "c40f368e-f905-465b-9af0-b0ecb753de79"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.436132 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts\") pod \"a532566c-ab86-4984-9212-1e48605d192b\" (UID: \"a532566c-ab86-4984-9212-1e48605d192b\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.436569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a532566c-ab86-4984-9212-1e48605d192b" (UID: "a532566c-ab86-4984-9212-1e48605d192b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.436734 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437160 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437208 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:58:58 crc kubenswrapper[4730]: 
I0320 15:58:58.437235 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwpz2\" (UniqueName: \"kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.437374 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.437404 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:58:58 crc kubenswrapper[4730]: E0320 15:58:58.437479 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:59:00.437463456 +0000 UTC m=+1199.650834825 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437392 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c40f368e-f905-465b-9af0-b0ecb753de79-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437613 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfwtq\" (UniqueName: \"kubernetes.io/projected/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-kube-api-access-vfwtq\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437627 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef40906b-a3dc-45b8-8bde-dd06eaaef85c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.437642 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a532566c-ab86-4984-9212-1e48605d192b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.438516 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh" (OuterVolumeSpecName: "kube-api-access-w9qkh") pod "a532566c-ab86-4984-9212-1e48605d192b" (UID: "a532566c-ab86-4984-9212-1e48605d192b"). InnerVolumeSpecName "kube-api-access-w9qkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.441018 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.441112 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r" (OuterVolumeSpecName: "kube-api-access-22s8r") pod "c40f368e-f905-465b-9af0-b0ecb753de79" (UID: "c40f368e-f905-465b-9af0-b0ecb753de79"). InnerVolumeSpecName "kube-api-access-22s8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.441494 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.442983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.453435 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwpz2\" (UniqueName: \"kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2\") pod \"glance-db-sync-mkvv4\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " pod="openstack/glance-db-sync-mkvv4" Mar 
20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.539695 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9qkh\" (UniqueName: \"kubernetes.io/projected/a532566c-ab86-4984-9212-1e48605d192b-kube-api-access-w9qkh\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.539752 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22s8r\" (UniqueName: \"kubernetes.io/projected/c40f368e-f905-465b-9af0-b0ecb753de79-kube-api-access-22s8r\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.564387 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x4h5x" event={"ID":"3198c781-92f7-40f1-9b6e-ed5310febe0b","Type":"ContainerDied","Data":"848e9fa2112624c8fb8c6bec3d6893fc6a8dd666fb6f1795ec93e5b1d6b5bea5"} Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.564431 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848e9fa2112624c8fb8c6bec3d6893fc6a8dd666fb6f1795ec93e5b1d6b5bea5" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.564436 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x4h5x" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.565960 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fvknw" event={"ID":"ef40906b-a3dc-45b8-8bde-dd06eaaef85c","Type":"ContainerDied","Data":"53640fa059a665b1d15935fd34ffaf73050cf1ca9fa0e3068b812699863c68d2"} Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.566000 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53640fa059a665b1d15935fd34ffaf73050cf1ca9fa0e3068b812699863c68d2" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.566049 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fvknw" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.568722 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db41-account-create-update-x7l2w" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.568726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db41-account-create-update-x7l2w" event={"ID":"c40f368e-f905-465b-9af0-b0ecb753de79","Type":"ContainerDied","Data":"085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7"} Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.568845 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085c52d92c419cf64b41d1ae79845132baa487a0824496f899efa452067532d7" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.571138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" event={"ID":"3806a27b-4a0f-439b-8660-d9ccd4bb0618","Type":"ContainerStarted","Data":"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5"} Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.571879 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.573433 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c5a1-account-create-update-hfdmn" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.578722 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5a1-account-create-update-hfdmn" event={"ID":"a532566c-ab86-4984-9212-1e48605d192b","Type":"ContainerDied","Data":"292856fca4df62efa63cfc164ccbb4f784eb6e19218a746589e6b7b1c3a0dd78"} Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.578779 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292856fca4df62efa63cfc164ccbb4f784eb6e19218a746589e6b7b1c3a0dd78" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.604194 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mkvv4" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.612624 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" podStartSLOduration=3.612602102 podStartE2EDuration="3.612602102s" podCreationTimestamp="2026-03-20 15:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:58:58.60690336 +0000 UTC m=+1197.820274729" watchObservedRunningTime="2026-03-20 15:58:58.612602102 +0000 UTC m=+1197.825973471" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.881932 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.950351 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts\") pod \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.950529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hhb\" (UniqueName: \"kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb\") pod \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\" (UID: \"c7b436cd-ff29-4a9f-9e58-4c8760b1e012\") " Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.951316 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7b436cd-ff29-4a9f-9e58-4c8760b1e012" (UID: "c7b436cd-ff29-4a9f-9e58-4c8760b1e012"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:58 crc kubenswrapper[4730]: I0320 15:58:58.966061 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb" (OuterVolumeSpecName: "kube-api-access-v5hhb") pod "c7b436cd-ff29-4a9f-9e58-4c8760b1e012" (UID: "c7b436cd-ff29-4a9f-9e58-4c8760b1e012"). InnerVolumeSpecName "kube-api-access-v5hhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.031117 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.052862 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hhb\" (UniqueName: \"kubernetes.io/projected/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-kube-api-access-v5hhb\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.052907 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b436cd-ff29-4a9f-9e58-4c8760b1e012-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.153693 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts\") pod \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.153817 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhx74\" (UniqueName: \"kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74\") pod \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\" (UID: \"a132fe19-9294-49c6-9b1e-fe3eed7f4bae\") " Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.157644 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a132fe19-9294-49c6-9b1e-fe3eed7f4bae" (UID: "a132fe19-9294-49c6-9b1e-fe3eed7f4bae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.158314 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74" (OuterVolumeSpecName: "kube-api-access-qhx74") pod "a132fe19-9294-49c6-9b1e-fe3eed7f4bae" (UID: "a132fe19-9294-49c6-9b1e-fe3eed7f4bae"). InnerVolumeSpecName "kube-api-access-qhx74". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.255877 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.255912 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhx74\" (UniqueName: \"kubernetes.io/projected/a132fe19-9294-49c6-9b1e-fe3eed7f4bae-kube-api-access-qhx74\") on node \"crc\" DevicePath \"\"" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.408799 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mkvv4"] Mar 20 15:58:59 crc kubenswrapper[4730]: W0320 15:58:59.421994 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37dd8777_c196_4db2_af7a_5560a939e02c.slice/crio-83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7 WatchSource:0}: Error finding container 83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7: Status 404 returned error can't find the container with id 83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7 Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.588308 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-87f2-account-create-update-lblc4" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.588309 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-87f2-account-create-update-lblc4" event={"ID":"a132fe19-9294-49c6-9b1e-fe3eed7f4bae","Type":"ContainerDied","Data":"10d5a54946b4bf571d21b82b12cd1c4555f94ac55a2e76b8dacd79b2d2c7f077"} Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.588447 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d5a54946b4bf571d21b82b12cd1c4555f94ac55a2e76b8dacd79b2d2c7f077" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.590444 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n9vdf" event={"ID":"c7b436cd-ff29-4a9f-9e58-4c8760b1e012","Type":"ContainerDied","Data":"46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413"} Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.590471 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46623510ee8353ece4203ec113c421119597c40cda4c774ba80d9347a1c73413" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.590528 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n9vdf" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.593516 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkvv4" event={"ID":"37dd8777-c196-4db2-af7a-5560a939e02c","Type":"ContainerStarted","Data":"83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7"} Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761386 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c8wgl"] Mar 20 15:58:59 crc kubenswrapper[4730]: E0320 15:58:59.761743 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a532566c-ab86-4984-9212-1e48605d192b" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761761 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a532566c-ab86-4984-9212-1e48605d192b" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: E0320 15:58:59.761780 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef40906b-a3dc-45b8-8bde-dd06eaaef85c" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761786 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef40906b-a3dc-45b8-8bde-dd06eaaef85c" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: E0320 15:58:59.761797 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b436cd-ff29-4a9f-9e58-4c8760b1e012" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761805 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b436cd-ff29-4a9f-9e58-4c8760b1e012" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: E0320 15:58:59.761819 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f368e-f905-465b-9af0-b0ecb753de79" 
containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761825 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f368e-f905-465b-9af0-b0ecb753de79" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: E0320 15:58:59.761846 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a132fe19-9294-49c6-9b1e-fe3eed7f4bae" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.761852 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a132fe19-9294-49c6-9b1e-fe3eed7f4bae" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762000 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a132fe19-9294-49c6-9b1e-fe3eed7f4bae" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762010 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef40906b-a3dc-45b8-8bde-dd06eaaef85c" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762025 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a532566c-ab86-4984-9212-1e48605d192b" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762037 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b436cd-ff29-4a9f-9e58-4c8760b1e012" containerName="mariadb-database-create" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762046 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40f368e-f905-465b-9af0-b0ecb753de79" containerName="mariadb-account-create-update" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.762562 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.775226 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8wgl"] Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.786267 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.864863 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts\") pod \"root-account-create-update-c8wgl\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.864953 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjz56\" (UniqueName: \"kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56\") pod \"root-account-create-update-c8wgl\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.966132 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts\") pod \"root-account-create-update-c8wgl\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.966238 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjz56\" (UniqueName: \"kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56\") pod \"root-account-create-update-c8wgl\" (UID: 
\"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.967177 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts\") pod \"root-account-create-update-c8wgl\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:58:59 crc kubenswrapper[4730]: I0320 15:58:59.987186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjz56\" (UniqueName: \"kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56\") pod \"root-account-create-update-c8wgl\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " pod="openstack/root-account-create-update-c8wgl" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.142748 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8wgl" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.414160 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7d8lv"] Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.415341 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.417782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.418062 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.418482 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.427822 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7d8lv"] Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.473935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.473972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474063 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474123 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474182 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.474206 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: E0320 15:59:00.474421 4730 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:59:00 crc kubenswrapper[4730]: E0320 15:59:00.474451 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:59:00 crc kubenswrapper[4730]: E0320 15:59:00.474492 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:59:04.474479281 +0000 UTC m=+1203.687850650 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575308 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575360 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575400 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: 
\"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575452 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575521 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575547 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.575627 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.577103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " 
pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.579271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.583352 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.586236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.588593 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.588788 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.593142 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr\") pod \"swift-ring-rebalance-7d8lv\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.675197 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8wgl"] Mar 20 15:59:00 crc kubenswrapper[4730]: I0320 15:59:00.753288 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.310731 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.626361 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerStarted","Data":"83d469de2778c173f20bbd31bfd4fc16492ac416000fbb48eb639e2a00a91feb"} Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.627764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8wgl" event={"ID":"a37760f5-4ee5-4b95-9364-e81582b732c7","Type":"ContainerStarted","Data":"7f523e2f068601b64366f703836d659d597d40e8112309dd07122d42b1769869"} Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.627784 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8wgl" event={"ID":"a37760f5-4ee5-4b95-9364-e81582b732c7","Type":"ContainerStarted","Data":"7f0455b4d6e218aa0ce0046f8dd059ae557ff987aece6c438eedd673d8f6f282"} Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.650799 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=8.444930863 
podStartE2EDuration="47.650783394s" podCreationTimestamp="2026-03-20 15:58:15 +0000 UTC" firstStartedPulling="2026-03-20 15:58:23.112610971 +0000 UTC m=+1162.325982340" lastFinishedPulling="2026-03-20 15:59:02.318463502 +0000 UTC m=+1201.531834871" observedRunningTime="2026-03-20 15:59:02.648635603 +0000 UTC m=+1201.862006992" watchObservedRunningTime="2026-03-20 15:59:02.650783394 +0000 UTC m=+1201.864154763" Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.666438 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-c8wgl" podStartSLOduration=3.666418889 podStartE2EDuration="3.666418889s" podCreationTimestamp="2026-03-20 15:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:02.662925059 +0000 UTC m=+1201.876296438" watchObservedRunningTime="2026-03-20 15:59:02.666418889 +0000 UTC m=+1201.879790258" Mar 20 15:59:02 crc kubenswrapper[4730]: I0320 15:59:02.800831 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7d8lv"] Mar 20 15:59:02 crc kubenswrapper[4730]: W0320 15:59:02.811557 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod167282ce_29fc_44db_9b0b_baf2c956f433.slice/crio-c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2 WatchSource:0}: Error finding container c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2: Status 404 returned error can't find the container with id c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2 Mar 20 15:59:03 crc kubenswrapper[4730]: I0320 15:59:03.637081 4730 generic.go:334] "Generic (PLEG): container finished" podID="a37760f5-4ee5-4b95-9364-e81582b732c7" containerID="7f523e2f068601b64366f703836d659d597d40e8112309dd07122d42b1769869" exitCode=0 Mar 20 15:59:03 crc kubenswrapper[4730]: I0320 
15:59:03.637159 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8wgl" event={"ID":"a37760f5-4ee5-4b95-9364-e81582b732c7","Type":"ContainerDied","Data":"7f523e2f068601b64366f703836d659d597d40e8112309dd07122d42b1769869"} Mar 20 15:59:03 crc kubenswrapper[4730]: I0320 15:59:03.642462 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7d8lv" event={"ID":"167282ce-29fc-44db-9b0b-baf2c956f433","Type":"ContainerStarted","Data":"c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2"} Mar 20 15:59:04 crc kubenswrapper[4730]: I0320 15:59:04.486511 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:59:04 crc kubenswrapper[4730]: E0320 15:59:04.486737 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:59:04 crc kubenswrapper[4730]: E0320 15:59:04.486779 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:59:04 crc kubenswrapper[4730]: E0320 15:59:04.486843 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:59:12.486825499 +0000 UTC m=+1211.700196868 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.717466 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.761914 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8wgl" Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.788435 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.788651 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-648686d659-c5gtt" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" containerID="cri-o://fe73524182281c96c8ff70a33be500f38666a259e3cec724e30e6c153a1251ae" gracePeriod=10 Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.815716 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjz56\" (UniqueName: \"kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56\") pod \"a37760f5-4ee5-4b95-9364-e81582b732c7\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.815946 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts\") pod \"a37760f5-4ee5-4b95-9364-e81582b732c7\" (UID: \"a37760f5-4ee5-4b95-9364-e81582b732c7\") " Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.817120 4730 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a37760f5-4ee5-4b95-9364-e81582b732c7" (UID: "a37760f5-4ee5-4b95-9364-e81582b732c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.843446 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56" (OuterVolumeSpecName: "kube-api-access-tjz56") pod "a37760f5-4ee5-4b95-9364-e81582b732c7" (UID: "a37760f5-4ee5-4b95-9364-e81582b732c7"). InnerVolumeSpecName "kube-api-access-tjz56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.918105 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37760f5-4ee5-4b95-9364-e81582b732c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:05 crc kubenswrapper[4730]: I0320 15:59:05.918142 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjz56\" (UniqueName: \"kubernetes.io/projected/a37760f5-4ee5-4b95-9364-e81582b732c7-kube-api-access-tjz56\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.679388 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8wgl" Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.681466 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8wgl" event={"ID":"a37760f5-4ee5-4b95-9364-e81582b732c7","Type":"ContainerDied","Data":"7f0455b4d6e218aa0ce0046f8dd059ae557ff987aece6c438eedd673d8f6f282"} Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.681523 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0455b4d6e218aa0ce0046f8dd059ae557ff987aece6c438eedd673d8f6f282" Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.683077 4730 generic.go:334] "Generic (PLEG): container finished" podID="68e4e5c3-825d-477e-a403-0cae45a86806" containerID="fe73524182281c96c8ff70a33be500f38666a259e3cec724e30e6c153a1251ae" exitCode=0 Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.683171 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648686d659-c5gtt" event={"ID":"68e4e5c3-825d-477e-a403-0cae45a86806","Type":"ContainerDied","Data":"fe73524182281c96c8ff70a33be500f38666a259e3cec724e30e6c153a1251ae"} Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.724488 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:06 crc kubenswrapper[4730]: I0320 15:59:06.924046 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 20 15:59:07 crc kubenswrapper[4730]: I0320 15:59:07.696872 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-648686d659-c5gtt" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Mar 20 15:59:09 crc kubenswrapper[4730]: I0320 15:59:09.145146 4730 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-gtrnp" podUID="31651551-edb9-4793-a752-39fa60a85ee3" containerName="ovn-controller" probeResult="failure" output=< Mar 20 15:59:09 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 15:59:09 crc kubenswrapper[4730]: > Mar 20 15:59:10 crc kubenswrapper[4730]: I0320 15:59:10.740014 4730 generic.go:334] "Generic (PLEG): container finished" podID="df9ca02d-e20f-4f55-ba14-92b91812afb6" containerID="3d7e7a0cabaf1b38c1891734e06c9106a1e8c0af0454fdefaf773422d1dcf747" exitCode=0 Mar 20 15:59:10 crc kubenswrapper[4730]: I0320 15:59:10.740325 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"df9ca02d-e20f-4f55-ba14-92b91812afb6","Type":"ContainerDied","Data":"3d7e7a0cabaf1b38c1891734e06c9106a1e8c0af0454fdefaf773422d1dcf747"} Mar 20 15:59:10 crc kubenswrapper[4730]: I0320 15:59:10.743149 4730 generic.go:334] "Generic (PLEG): container finished" podID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerID="13985a1e2e3d58d396be0af6437cdcdb0bbdea54308502442707c077b36e9713" exitCode=0 Mar 20 15:59:10 crc kubenswrapper[4730]: I0320 15:59:10.743176 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerDied","Data":"13985a1e2e3d58d396be0af6437cdcdb0bbdea54308502442707c077b36e9713"} Mar 20 15:59:11 crc kubenswrapper[4730]: I0320 15:59:11.242615 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-c8wgl"] Mar 20 15:59:11 crc kubenswrapper[4730]: I0320 15:59:11.248996 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c8wgl"] Mar 20 15:59:11 crc kubenswrapper[4730]: I0320 15:59:11.557595 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37760f5-4ee5-4b95-9364-e81582b732c7" 
path="/var/lib/kubelet/pods/a37760f5-4ee5-4b95-9364-e81582b732c7/volumes" Mar 20 15:59:11 crc kubenswrapper[4730]: I0320 15:59:11.753014 4730 generic.go:334] "Generic (PLEG): container finished" podID="8043f69c-832c-4afa-a9b9-211507664805" containerID="5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5" exitCode=0 Mar 20 15:59:11 crc kubenswrapper[4730]: I0320 15:59:11.753226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerDied","Data":"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5"} Mar 20 15:59:12 crc kubenswrapper[4730]: I0320 15:59:12.575677 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:59:12 crc kubenswrapper[4730]: E0320 15:59:12.575872 4730 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 15:59:12 crc kubenswrapper[4730]: E0320 15:59:12.575899 4730 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 15:59:12 crc kubenswrapper[4730]: E0320 15:59:12.575958 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift podName:2c9def6e-27a0-4543-8d3c-07b3e4005b33 nodeName:}" failed. No retries permitted until 2026-03-20 15:59:28.575940097 +0000 UTC m=+1227.789311466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift") pod "swift-storage-0" (UID: "2c9def6e-27a0-4543-8d3c-07b3e4005b33") : configmap "swift-ring-files" not found Mar 20 15:59:12 crc kubenswrapper[4730]: I0320 15:59:12.696688 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-648686d659-c5gtt" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.781095 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.781215 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-648686d659-c5gtt" event={"ID":"68e4e5c3-825d-477e-a403-0cae45a86806","Type":"ContainerDied","Data":"4818c9171456a6ff54f39a98823d5ad13ee602182702fbabd7963c0de43a3bf8"} Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.781779 4730 scope.go:117] "RemoveContainer" containerID="fe73524182281c96c8ff70a33be500f38666a259e3cec724e30e6c153a1251ae" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.811445 4730 scope.go:117] "RemoveContainer" containerID="c68e3d34495e1743eae1f1377e46b36b943735fc7af94f6c4a34f2182614de74" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.900108 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc\") pod \"68e4e5c3-825d-477e-a403-0cae45a86806\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.900153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdlr\" (UniqueName: 
\"kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr\") pod \"68e4e5c3-825d-477e-a403-0cae45a86806\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.900269 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb\") pod \"68e4e5c3-825d-477e-a403-0cae45a86806\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.900315 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb\") pod \"68e4e5c3-825d-477e-a403-0cae45a86806\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.900370 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config\") pod \"68e4e5c3-825d-477e-a403-0cae45a86806\" (UID: \"68e4e5c3-825d-477e-a403-0cae45a86806\") " Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.904580 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr" (OuterVolumeSpecName: "kube-api-access-shdlr") pod "68e4e5c3-825d-477e-a403-0cae45a86806" (UID: "68e4e5c3-825d-477e-a403-0cae45a86806"). InnerVolumeSpecName "kube-api-access-shdlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.937293 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68e4e5c3-825d-477e-a403-0cae45a86806" (UID: "68e4e5c3-825d-477e-a403-0cae45a86806"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.942631 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68e4e5c3-825d-477e-a403-0cae45a86806" (UID: "68e4e5c3-825d-477e-a403-0cae45a86806"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.946074 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config" (OuterVolumeSpecName: "config") pod "68e4e5c3-825d-477e-a403-0cae45a86806" (UID: "68e4e5c3-825d-477e-a403-0cae45a86806"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:13 crc kubenswrapper[4730]: I0320 15:59:13.982591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68e4e5c3-825d-477e-a403-0cae45a86806" (UID: "68e4e5c3-825d-477e-a403-0cae45a86806"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.002563 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.002599 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdlr\" (UniqueName: \"kubernetes.io/projected/68e4e5c3-825d-477e-a403-0cae45a86806-kube-api-access-shdlr\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.002612 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.002620 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.002628 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e4e5c3-825d-477e-a403-0cae45a86806-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.148064 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gtrnp" podUID="31651551-edb9-4793-a752-39fa60a85ee3" containerName="ovn-controller" probeResult="failure" output=< Mar 20 15:59:14 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 15:59:14 crc kubenswrapper[4730]: > Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.790856 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7d8lv" 
event={"ID":"167282ce-29fc-44db-9b0b-baf2c956f433","Type":"ContainerStarted","Data":"768be9518c03c37026c048245f752f8e9492e3f207a6cde3432392aa9859edc5"} Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.792743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkvv4" event={"ID":"37dd8777-c196-4db2-af7a-5560a939e02c","Type":"ContainerStarted","Data":"45a09c4f4bffe31b4f9cf83737f4a3331b9ba65b3e4bbf1a00d15070f2dd1fbb"} Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.794507 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-648686d659-c5gtt" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.797546 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerStarted","Data":"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"} Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.797970 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.800151 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerStarted","Data":"3a783d296547ab247634b62ed131b57fa9392453e5aadc95036d56c15ea1686f"} Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.800690 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.802418 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"df9ca02d-e20f-4f55-ba14-92b91812afb6","Type":"ContainerStarted","Data":"ee087c818bbb63c9bd400095c1dd7f6ec4709f200b2191330d19a5435b2266e6"} Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.802725 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.829926 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7d8lv" podStartSLOduration=4.132109491 podStartE2EDuration="14.829906486s" podCreationTimestamp="2026-03-20 15:59:00 +0000 UTC" firstStartedPulling="2026-03-20 15:59:02.813914899 +0000 UTC m=+1202.027286268" lastFinishedPulling="2026-03-20 15:59:13.511711824 +0000 UTC m=+1212.725083263" observedRunningTime="2026-03-20 15:59:14.82336375 +0000 UTC m=+1214.036735119" watchObservedRunningTime="2026-03-20 15:59:14.829906486 +0000 UTC m=+1214.043277855" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.850019 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mkvv4" podStartSLOduration=2.682739028 podStartE2EDuration="16.850002247s" podCreationTimestamp="2026-03-20 15:58:58 +0000 UTC" firstStartedPulling="2026-03-20 15:58:59.425012204 +0000 UTC m=+1198.638383583" lastFinishedPulling="2026-03-20 15:59:13.592275433 +0000 UTC m=+1212.805646802" observedRunningTime="2026-03-20 15:59:14.843541683 +0000 UTC m=+1214.056913052" watchObservedRunningTime="2026-03-20 15:59:14.850002247 +0000 UTC m=+1214.063373606" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.870263 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=53.485550587 podStartE2EDuration="1m6.870220991s" podCreationTimestamp="2026-03-20 15:58:08 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.761890696 +0000 UTC m=+1161.975262065" lastFinishedPulling="2026-03-20 15:58:36.14656111 +0000 UTC m=+1175.359932469" observedRunningTime="2026-03-20 15:59:14.866436514 +0000 UTC m=+1214.079807923" watchObservedRunningTime="2026-03-20 15:59:14.870220991 +0000 UTC m=+1214.083592360" Mar 20 
15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.900173 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.950118835 podStartE2EDuration="1m6.900154332s" podCreationTimestamp="2026-03-20 15:58:08 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.21965882 +0000 UTC m=+1161.433030189" lastFinishedPulling="2026-03-20 15:58:36.169694317 +0000 UTC m=+1175.383065686" observedRunningTime="2026-03-20 15:59:14.891577388 +0000 UTC m=+1214.104948757" watchObservedRunningTime="2026-03-20 15:59:14.900154332 +0000 UTC m=+1214.113525701" Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.911404 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.917916 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-648686d659-c5gtt"] Mar 20 15:59:14 crc kubenswrapper[4730]: I0320 15:59:14.931846 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.033145057 podStartE2EDuration="1m7.931827052s" podCreationTimestamp="2026-03-20 15:58:07 +0000 UTC" firstStartedPulling="2026-03-20 15:58:22.7831313 +0000 UTC m=+1161.996502669" lastFinishedPulling="2026-03-20 15:58:35.681813295 +0000 UTC m=+1174.895184664" observedRunningTime="2026-03-20 15:59:14.924132093 +0000 UTC m=+1214.137503482" watchObservedRunningTime="2026-03-20 15:59:14.931827052 +0000 UTC m=+1214.145198421" Mar 20 15:59:15 crc kubenswrapper[4730]: I0320 15:59:15.547965 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" path="/var/lib/kubelet/pods/68e4e5c3-825d-477e-a403-0cae45a86806/volumes" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.238978 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d92d8"] Mar 20 15:59:16 crc 
kubenswrapper[4730]: E0320 15:59:16.239341 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="init" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.239353 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="init" Mar 20 15:59:16 crc kubenswrapper[4730]: E0320 15:59:16.239364 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37760f5-4ee5-4b95-9364-e81582b732c7" containerName="mariadb-account-create-update" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.239369 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37760f5-4ee5-4b95-9364-e81582b732c7" containerName="mariadb-account-create-update" Mar 20 15:59:16 crc kubenswrapper[4730]: E0320 15:59:16.239381 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.239387 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.239532 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e4e5c3-825d-477e-a403-0cae45a86806" containerName="dnsmasq-dns" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.239541 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37760f5-4ee5-4b95-9364-e81582b732c7" containerName="mariadb-account-create-update" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.240072 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.247018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.254761 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d92d8"] Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.354351 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb89m\" (UniqueName: \"kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m\") pod \"root-account-create-update-d92d8\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.354817 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts\") pod \"root-account-create-update-d92d8\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.456183 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts\") pod \"root-account-create-update-d92d8\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.456522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb89m\" (UniqueName: \"kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m\") pod \"root-account-create-update-d92d8\" (UID: 
\"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.456879 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts\") pod \"root-account-create-update-d92d8\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.483406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb89m\" (UniqueName: \"kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m\") pod \"root-account-create-update-d92d8\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.566287 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.724615 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.755613 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:16 crc kubenswrapper[4730]: I0320 15:59:16.827518 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:17 crc kubenswrapper[4730]: I0320 15:59:17.015197 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d92d8"] Mar 20 15:59:17 crc kubenswrapper[4730]: I0320 15:59:17.835277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d92d8" event={"ID":"ed167127-4e44-4877-bf9b-dbb6a23a8b3f","Type":"ContainerStarted","Data":"6f4ac67e084527a1cb38bd3c525c24e61f9c884d533f00e2d72554c431fbb247"} Mar 20 15:59:17 crc kubenswrapper[4730]: I0320 15:59:17.835564 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d92d8" event={"ID":"ed167127-4e44-4877-bf9b-dbb6a23a8b3f","Type":"ContainerStarted","Data":"bdc93b2eae763d308256566d1dc3d671bf388b3fb090367832183f2f953d3839"} Mar 20 15:59:17 crc kubenswrapper[4730]: I0320 15:59:17.855010 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-d92d8" podStartSLOduration=1.854992335 podStartE2EDuration="1.854992335s" podCreationTimestamp="2026-03-20 15:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:17.847223534 +0000 UTC m=+1217.060594923" watchObservedRunningTime="2026-03-20 15:59:17.854992335 +0000 UTC m=+1217.068363714" 
Mar 20 15:59:18 crc kubenswrapper[4730]: I0320 15:59:18.843334 4730 generic.go:334] "Generic (PLEG): container finished" podID="ed167127-4e44-4877-bf9b-dbb6a23a8b3f" containerID="6f4ac67e084527a1cb38bd3c525c24e61f9c884d533f00e2d72554c431fbb247" exitCode=0 Mar 20 15:59:18 crc kubenswrapper[4730]: I0320 15:59:18.843515 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d92d8" event={"ID":"ed167127-4e44-4877-bf9b-dbb6a23a8b3f","Type":"ContainerDied","Data":"6f4ac67e084527a1cb38bd3c525c24e61f9c884d533f00e2d72554c431fbb247"} Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.141802 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gtrnp" podUID="31651551-edb9-4793-a752-39fa60a85ee3" containerName="ovn-controller" probeResult="failure" output=< Mar 20 15:59:19 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 15:59:19 crc kubenswrapper[4730]: > Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.367339 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.367616 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="prometheus" containerID="cri-o://ccc4d976b4160ab2263002a763830d5f5f68919c64d310c4f41b79be9631a6ea" gracePeriod=600 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.367679 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="thanos-sidecar" containerID="cri-o://83d469de2778c173f20bbd31bfd4fc16492ac416000fbb48eb639e2a00a91feb" gracePeriod=600 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.367771 4730 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="config-reloader" containerID="cri-o://2e07902c18ce4590f197ddee6088f8273a8da0ef0fa191f28dc4116786b4c25f" gracePeriod=600 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869052 4730 generic.go:334] "Generic (PLEG): container finished" podID="fdd3845a-3723-438f-aa58-606451baed6c" containerID="83d469de2778c173f20bbd31bfd4fc16492ac416000fbb48eb639e2a00a91feb" exitCode=0 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869084 4730 generic.go:334] "Generic (PLEG): container finished" podID="fdd3845a-3723-438f-aa58-606451baed6c" containerID="2e07902c18ce4590f197ddee6088f8273a8da0ef0fa191f28dc4116786b4c25f" exitCode=0 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869096 4730 generic.go:334] "Generic (PLEG): container finished" podID="fdd3845a-3723-438f-aa58-606451baed6c" containerID="ccc4d976b4160ab2263002a763830d5f5f68919c64d310c4f41b79be9631a6ea" exitCode=0 Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerDied","Data":"83d469de2778c173f20bbd31bfd4fc16492ac416000fbb48eb639e2a00a91feb"} Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerDied","Data":"2e07902c18ce4590f197ddee6088f8273a8da0ef0fa191f28dc4116786b4c25f"} Mar 20 15:59:19 crc kubenswrapper[4730]: I0320 15:59:19.869230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerDied","Data":"ccc4d976b4160ab2263002a763830d5f5f68919c64d310c4f41b79be9631a6ea"} Mar 20 15:59:20 crc 
kubenswrapper[4730]: I0320 15:59:20.064548 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.123611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.123675 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.123707 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.123754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.123823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 
crc kubenswrapper[4730]: I0320 15:59:20.123908 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxrks\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.124049 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.125085 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.125185 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.125263 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0\") pod \"fdd3845a-3723-438f-aa58-606451baed6c\" (UID: \"fdd3845a-3723-438f-aa58-606451baed6c\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.133378 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.133754 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks" (OuterVolumeSpecName: "kube-api-access-pxrks") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "kube-api-access-pxrks". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.135862 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config" (OuterVolumeSpecName: "config") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.135959 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.136065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.136467 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.137577 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.140800 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out" (OuterVolumeSpecName: "config-out") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.159328 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config" (OuterVolumeSpecName: "web-config") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.160078 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fdd3845a-3723-438f-aa58-606451baed6c" (UID: "fdd3845a-3723-438f-aa58-606451baed6c"). InnerVolumeSpecName "pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.161004 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.235056 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts\") pod \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.235460 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb89m\" (UniqueName: \"kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m\") pod \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\" (UID: \"ed167127-4e44-4877-bf9b-dbb6a23a8b3f\") " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.235803 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed167127-4e44-4877-bf9b-dbb6a23a8b3f" (UID: "ed167127-4e44-4877-bf9b-dbb6a23a8b3f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.235932 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236006 4730 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236065 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236132 4730 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdd3845a-3723-438f-aa58-606451baed6c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236195 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236267 4730 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236327 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxrks\" (UniqueName: 
\"kubernetes.io/projected/fdd3845a-3723-438f-aa58-606451baed6c-kube-api-access-pxrks\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236433 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") on node \"crc\" " Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236501 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdd3845a-3723-438f-aa58-606451baed6c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.236569 4730 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdd3845a-3723-438f-aa58-606451baed6c-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.239833 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m" (OuterVolumeSpecName: "kube-api-access-pb89m") pod "ed167127-4e44-4877-bf9b-dbb6a23a8b3f" (UID: "ed167127-4e44-4877-bf9b-dbb6a23a8b3f"). InnerVolumeSpecName "kube-api-access-pb89m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.262930 4730 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.263091 4730 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d") on node "crc" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.338226 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.338564 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb89m\" (UniqueName: \"kubernetes.io/projected/ed167127-4e44-4877-bf9b-dbb6a23a8b3f-kube-api-access-pb89m\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.338577 4730 reconciler_common.go:293] "Volume detached for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.894077 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdd3845a-3723-438f-aa58-606451baed6c","Type":"ContainerDied","Data":"1e180be81f67edf4314b8f53e9215c19609d113f31f1dc11bb8e1b622d9ae961"} Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.894172 4730 scope.go:117] "RemoveContainer" containerID="83d469de2778c173f20bbd31bfd4fc16492ac416000fbb48eb639e2a00a91feb" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.894812 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.895964 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d92d8" event={"ID":"ed167127-4e44-4877-bf9b-dbb6a23a8b3f","Type":"ContainerDied","Data":"bdc93b2eae763d308256566d1dc3d671bf388b3fb090367832183f2f953d3839"} Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.896006 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc93b2eae763d308256566d1dc3d671bf388b3fb090367832183f2f953d3839" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.896015 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d92d8" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.929779 4730 scope.go:117] "RemoveContainer" containerID="2e07902c18ce4590f197ddee6088f8273a8da0ef0fa191f28dc4116786b4c25f" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.946537 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.952069 4730 scope.go:117] "RemoveContainer" containerID="ccc4d976b4160ab2263002a763830d5f5f68919c64d310c4f41b79be9631a6ea" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.955981 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973180 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:20 crc kubenswrapper[4730]: E0320 15:59:20.973525 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="prometheus" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973543 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="prometheus" Mar 20 15:59:20 crc kubenswrapper[4730]: E0320 15:59:20.973559 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="init-config-reloader" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973565 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="init-config-reloader" Mar 20 15:59:20 crc kubenswrapper[4730]: E0320 15:59:20.973577 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="thanos-sidecar" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973583 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="thanos-sidecar" Mar 20 15:59:20 crc kubenswrapper[4730]: E0320 15:59:20.973596 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="config-reloader" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973602 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="config-reloader" Mar 20 15:59:20 crc kubenswrapper[4730]: E0320 15:59:20.973612 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed167127-4e44-4877-bf9b-dbb6a23a8b3f" containerName="mariadb-account-create-update" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973617 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed167127-4e44-4877-bf9b-dbb6a23a8b3f" containerName="mariadb-account-create-update" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973787 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed167127-4e44-4877-bf9b-dbb6a23a8b3f" containerName="mariadb-account-create-update" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973802 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="config-reloader" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973811 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="thanos-sidecar" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.973850 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd3845a-3723-438f-aa58-606451baed6c" containerName="prometheus" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.975268 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.978513 4730 scope.go:117] "RemoveContainer" containerID="e4820a88fffd97776afd3e8f20ce1473d0c4e99acb7cc9e56c9e53eaef07563a" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.978715 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.978805 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980159 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980364 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980529 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980695 4730 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980887 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.980941 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2q5k6" Mar 20 15:59:20 crc kubenswrapper[4730]: I0320 15:59:20.990058 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.006202 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049669 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049745 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049778 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049811 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049860 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 
15:59:21.049886 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049928 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.049987 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vpw\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.050019 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.050046 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" 
(UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.050073 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.151516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.152218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.152332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.152941 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.152994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vpw\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153022 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153043 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153062 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153113 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153148 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153188 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 
15:59:21.153281 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.153691 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.156908 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.157031 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.161279 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162121 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162350 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162545 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162571 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c50c5c57c27fdb24da1fcbf3a7504c7bda45f4dc15a5678e0deb708aa433733/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.162777 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.172404 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vpw\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.172862 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.173325 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.194183 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.349989 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:21 crc kubenswrapper[4730]: I0320 15:59:21.543180 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd3845a-3723-438f-aa58-606451baed6c" path="/var/lib/kubelet/pods/fdd3845a-3723-438f-aa58-606451baed6c/volumes" Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:21.811959 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:21.910771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerStarted","Data":"8f7081ac79f5f8ab5d11083740ef5a60bf4e5c0ec09313ba816c9290b7a2077b"} Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:21.915656 4730 generic.go:334] "Generic (PLEG): container finished" podID="167282ce-29fc-44db-9b0b-baf2c956f433" containerID="768be9518c03c37026c048245f752f8e9492e3f207a6cde3432392aa9859edc5" exitCode=0 Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:21.915694 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7d8lv" 
event={"ID":"167282ce-29fc-44db-9b0b-baf2c956f433","Type":"ContainerDied","Data":"768be9518c03c37026c048245f752f8e9492e3f207a6cde3432392aa9859edc5"} Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:22.923988 4730 generic.go:334] "Generic (PLEG): container finished" podID="37dd8777-c196-4db2-af7a-5560a939e02c" containerID="45a09c4f4bffe31b4f9cf83737f4a3331b9ba65b3e4bbf1a00d15070f2dd1fbb" exitCode=0 Mar 20 15:59:22 crc kubenswrapper[4730]: I0320 15:59:22.924118 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkvv4" event={"ID":"37dd8777-c196-4db2-af7a-5560a939e02c","Type":"ContainerDied","Data":"45a09c4f4bffe31b4f9cf83737f4a3331b9ba65b3e4bbf1a00d15070f2dd1fbb"} Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.271995 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388530 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388628 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388659 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 
15:59:23.388723 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388776 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.388837 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf\") pod \"167282ce-29fc-44db-9b0b-baf2c956f433\" (UID: \"167282ce-29fc-44db-9b0b-baf2c956f433\") " Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.389736 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.389844 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.396011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr" (OuterVolumeSpecName: "kube-api-access-dzrnr") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "kube-api-access-dzrnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.401184 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.490941 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/167282ce-29fc-44db-9b0b-baf2c956f433-kube-api-access-dzrnr\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.490977 4730 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/167282ce-29fc-44db-9b0b-baf2c956f433-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.490986 4730 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.490995 4730 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.517163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.518352 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts" (OuterVolumeSpecName: "scripts") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.525769 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "167282ce-29fc-44db-9b0b-baf2c956f433" (UID: "167282ce-29fc-44db-9b0b-baf2c956f433"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.593618 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.593662 4730 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/167282ce-29fc-44db-9b0b-baf2c956f433-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.593671 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/167282ce-29fc-44db-9b0b-baf2c956f433-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.933892 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7d8lv" event={"ID":"167282ce-29fc-44db-9b0b-baf2c956f433","Type":"ContainerDied","Data":"c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2"} Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.933967 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bed297b15aeb66e5504095b19b123685071c689e915cfbc2281dd9c7ff81a2" Mar 20 15:59:23 crc kubenswrapper[4730]: I0320 15:59:23.933906 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7d8lv" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.341579 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gtrnp" podUID="31651551-edb9-4793-a752-39fa60a85ee3" containerName="ovn-controller" probeResult="failure" output=< Mar 20 15:59:24 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 15:59:24 crc kubenswrapper[4730]: > Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.720755 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mkvv4" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.810508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data\") pod \"37dd8777-c196-4db2-af7a-5560a939e02c\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.810580 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data\") pod \"37dd8777-c196-4db2-af7a-5560a939e02c\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.810643 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle\") pod \"37dd8777-c196-4db2-af7a-5560a939e02c\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.810800 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwpz2\" (UniqueName: 
\"kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2\") pod \"37dd8777-c196-4db2-af7a-5560a939e02c\" (UID: \"37dd8777-c196-4db2-af7a-5560a939e02c\") " Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.817617 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2" (OuterVolumeSpecName: "kube-api-access-wwpz2") pod "37dd8777-c196-4db2-af7a-5560a939e02c" (UID: "37dd8777-c196-4db2-af7a-5560a939e02c"). InnerVolumeSpecName "kube-api-access-wwpz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.827016 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "37dd8777-c196-4db2-af7a-5560a939e02c" (UID: "37dd8777-c196-4db2-af7a-5560a939e02c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.840127 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dd8777-c196-4db2-af7a-5560a939e02c" (UID: "37dd8777-c196-4db2-af7a-5560a939e02c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.860820 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data" (OuterVolumeSpecName: "config-data") pod "37dd8777-c196-4db2-af7a-5560a939e02c" (UID: "37dd8777-c196-4db2-af7a-5560a939e02c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.912644 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.912681 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.912694 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd8777-c196-4db2-af7a-5560a939e02c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.912708 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwpz2\" (UniqueName: \"kubernetes.io/projected/37dd8777-c196-4db2-af7a-5560a939e02c-kube-api-access-wwpz2\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.941715 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mkvv4" event={"ID":"37dd8777-c196-4db2-af7a-5560a939e02c","Type":"ContainerDied","Data":"83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7"} Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.941752 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ca9233a49380ba616e8a33edba2e21a0e34d304c17191d6b978242a55feaa7" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.941804 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mkvv4" Mar 20 15:59:24 crc kubenswrapper[4730]: I0320 15:59:24.951579 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerStarted","Data":"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85"} Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.319397 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:25 crc kubenswrapper[4730]: E0320 15:59:25.320001 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd8777-c196-4db2-af7a-5560a939e02c" containerName="glance-db-sync" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.320023 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd8777-c196-4db2-af7a-5560a939e02c" containerName="glance-db-sync" Mar 20 15:59:25 crc kubenswrapper[4730]: E0320 15:59:25.320033 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167282ce-29fc-44db-9b0b-baf2c956f433" containerName="swift-ring-rebalance" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.320040 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="167282ce-29fc-44db-9b0b-baf2c956f433" containerName="swift-ring-rebalance" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.320187 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="167282ce-29fc-44db-9b0b-baf2c956f433" containerName="swift-ring-rebalance" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.320203 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd8777-c196-4db2-af7a-5560a939e02c" containerName="glance-db-sync" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.320979 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.343220 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.424894 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.424973 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.425011 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hslv\" (UniqueName: \"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.425035 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.425146 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.526951 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hslv\" (UniqueName: \"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.526991 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.527086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.527139 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.527177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.528091 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.528104 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.528688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.528710 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.547073 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hslv\" (UniqueName: \"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv\") pod \"dnsmasq-dns-68f7d4448c-cvqk4\" 
(UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:25 crc kubenswrapper[4730]: I0320 15:59:25.641645 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:26 crc kubenswrapper[4730]: I0320 15:59:26.086935 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:26 crc kubenswrapper[4730]: W0320 15:59:26.090420 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae0793d_af8a_4808_b632_9f8b22a4d0c0.slice/crio-12c589b54c6d611757ef6559c9af4693c0c76b7c0e6f07d2b8ae5108a83f1489 WatchSource:0}: Error finding container 12c589b54c6d611757ef6559c9af4693c0c76b7c0e6f07d2b8ae5108a83f1489: Status 404 returned error can't find the container with id 12c589b54c6d611757ef6559c9af4693c0c76b7c0e6f07d2b8ae5108a83f1489 Mar 20 15:59:26 crc kubenswrapper[4730]: I0320 15:59:26.966999 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerID="ed9c122c62fe6334e758d54aed5e4b1a868adadd18c7ec06bf5ae96e604dfa76" exitCode=0 Mar 20 15:59:26 crc kubenswrapper[4730]: I0320 15:59:26.967143 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" event={"ID":"3ae0793d-af8a-4808-b632-9f8b22a4d0c0","Type":"ContainerDied","Data":"ed9c122c62fe6334e758d54aed5e4b1a868adadd18c7ec06bf5ae96e604dfa76"} Mar 20 15:59:26 crc kubenswrapper[4730]: I0320 15:59:26.967436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" event={"ID":"3ae0793d-af8a-4808-b632-9f8b22a4d0c0","Type":"ContainerStarted","Data":"12c589b54c6d611757ef6559c9af4693c0c76b7c0e6f07d2b8ae5108a83f1489"} Mar 20 15:59:27 crc kubenswrapper[4730]: I0320 15:59:27.976790 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" event={"ID":"3ae0793d-af8a-4808-b632-9f8b22a4d0c0","Type":"ContainerStarted","Data":"cca89ebad14e458072774d3863e92e5ae8961f7af2fa95160875c04d8dd584a6"} Mar 20 15:59:27 crc kubenswrapper[4730]: I0320 15:59:27.977191 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:27 crc kubenswrapper[4730]: I0320 15:59:27.999686 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" podStartSLOduration=2.999666884 podStartE2EDuration="2.999666884s" podCreationTimestamp="2026-03-20 15:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:27.991221494 +0000 UTC m=+1227.204592863" watchObservedRunningTime="2026-03-20 15:59:27.999666884 +0000 UTC m=+1227.213038253" Mar 20 15:59:28 crc kubenswrapper[4730]: I0320 15:59:28.587895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:59:28 crc kubenswrapper[4730]: I0320 15:59:28.593522 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c9def6e-27a0-4543-8d3c-07b3e4005b33-etc-swift\") pod \"swift-storage-0\" (UID: \"2c9def6e-27a0-4543-8d3c-07b3e4005b33\") " pod="openstack/swift-storage-0" Mar 20 15:59:28 crc kubenswrapper[4730]: I0320 15:59:28.742163 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.132488 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gtrnp" podUID="31651551-edb9-4793-a752-39fa60a85ee3" containerName="ovn-controller" probeResult="failure" output=< Mar 20 15:59:29 crc kubenswrapper[4730]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 15:59:29 crc kubenswrapper[4730]: > Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.152223 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.156419 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cdd7f" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.176266 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.356008 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.382661 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gtrnp-config-kwcsn"] Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.383825 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.387723 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412839 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412869 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mh6\" (UniqueName: \"kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412926 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412951 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.412997 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.457999 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtrnp-config-kwcsn"] Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.492423 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="df9ca02d-e20f-4f55-ba14-92b91812afb6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514237 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82mh6\" (UniqueName: \"kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514504 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " 
pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514585 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514670 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514834 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.514984 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " 
pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.515097 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.515136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.515788 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.516887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.535998 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82mh6\" (UniqueName: \"kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6\") pod \"ovn-controller-gtrnp-config-kwcsn\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " 
pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.719960 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.786242 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.991787 4730 generic.go:334] "Generic (PLEG): container finished" podID="b9474555-d03c-4f34-8914-15b7654ec76e" containerID="c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85" exitCode=0 Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.991869 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerDied","Data":"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85"} Mar 20 15:59:29 crc kubenswrapper[4730]: I0320 15:59:29.995047 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"e83fd0cac8ffe7d058e2c956c9899a6ff0ab5f5026ca0cc56a4a1252042611ea"} Mar 20 15:59:30 crc kubenswrapper[4730]: I0320 15:59:30.612333 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtrnp-config-kwcsn"] Mar 20 15:59:30 crc kubenswrapper[4730]: W0320 15:59:30.620945 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b45ada4_ae75_43d4_b589_0adca6844b9c.slice/crio-7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9 WatchSource:0}: Error finding container 
7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9: Status 404 returned error can't find the container with id 7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9 Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.700913 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerStarted","Data":"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce"} Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.704760 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"43d0f186b9d31a94452b701ddd22187b3a7732670ec08b107b1eb534ab0742cc"} Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.704810 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"ef7ca60bdf842c5c81f4b4cf24250a34944d74feea21474cb6a6eb996f1aaa1c"} Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.704823 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"615e9add6050372e67d95a0542a96cc9945c562044da60e92bc777af8b054cce"} Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.704834 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"0e133301e01f9e38eca7cfa40a22fdcab5a40fda8cabacecf46480ca86818f89"} Mar 20 15:59:31 crc kubenswrapper[4730]: I0320 15:59:31.706290 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtrnp-config-kwcsn" 
event={"ID":"4b45ada4-ae75-43d4-b589-0adca6844b9c","Type":"ContainerStarted","Data":"7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9"} Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.721870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerStarted","Data":"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7"} Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.722455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerStarted","Data":"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec"} Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.727769 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"e5cd07648ca70e7df2f81765c08dbeec89730f1bbd8fc11d3f13a6d3af2301c3"} Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.727812 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"52f2e369d5c1e38c4f44cbe410c75d284679eba0a6e2928d0a34a5744ca05c14"} Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.730010 4730 generic.go:334] "Generic (PLEG): container finished" podID="4b45ada4-ae75-43d4-b589-0adca6844b9c" containerID="e7513ea86a1e88bc7e61a8263e52b8513c6fca0f458503f246a5451604a802da" exitCode=0 Mar 20 15:59:32 crc kubenswrapper[4730]: I0320 15:59:32.730066 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtrnp-config-kwcsn" event={"ID":"4b45ada4-ae75-43d4-b589-0adca6844b9c","Type":"ContainerDied","Data":"e7513ea86a1e88bc7e61a8263e52b8513c6fca0f458503f246a5451604a802da"} Mar 20 15:59:32 crc kubenswrapper[4730]: 
I0320 15:59:32.762481 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.762464344 podStartE2EDuration="12.762464344s" podCreationTimestamp="2026-03-20 15:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:32.756094523 +0000 UTC m=+1231.969465912" watchObservedRunningTime="2026-03-20 15:59:32.762464344 +0000 UTC m=+1231.975835713" Mar 20 15:59:33 crc kubenswrapper[4730]: I0320 15:59:33.740640 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"d0d4d4543419128c3541b234a2fa4a59fecbe52f28ed83b1a77213a046cbd529"} Mar 20 15:59:33 crc kubenswrapper[4730]: I0320 15:59:33.740936 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"b29eb91143f55f74d98cef57d7abf643c81f5daa634b3f30490fa642bc7121e0"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.106453 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.149600 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gtrnp" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186050 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186178 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186180 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run" (OuterVolumeSpecName: "var-run") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186216 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82mh6\" (UniqueName: \"kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186332 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186421 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts\") pod \"4b45ada4-ae75-43d4-b589-0adca6844b9c\" (UID: \"4b45ada4-ae75-43d4-b589-0adca6844b9c\") " Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186490 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186534 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186740 4730 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186756 4730 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186766 4730 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4b45ada4-ae75-43d4-b589-0adca6844b9c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.186996 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.187261 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts" (OuterVolumeSpecName: "scripts") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.191492 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6" (OuterVolumeSpecName: "kube-api-access-82mh6") pod "4b45ada4-ae75-43d4-b589-0adca6844b9c" (UID: "4b45ada4-ae75-43d4-b589-0adca6844b9c"). InnerVolumeSpecName "kube-api-access-82mh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.288885 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.288923 4730 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4b45ada4-ae75-43d4-b589-0adca6844b9c-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.288957 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82mh6\" (UniqueName: \"kubernetes.io/projected/4b45ada4-ae75-43d4-b589-0adca6844b9c-kube-api-access-82mh6\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.761409 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"e3c73940d22384d548f188be48d121dfb04490ec2e9d0d3beddf5a5c59cb93dc"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.761814 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"7d767175ee4bf8f5b9a3d59ca4ec9669048c7b2e864ff322c9c461b712e9559a"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.761829 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"fcd654aff2b2879b9299de147a59bc5a0c3867d284c41e4555e76fc0c05a01a9"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.761842 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"2e495f4dc74f748695dde7cba48269b5971295aa4d03e008732a13ba023705f5"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.761854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"b48c30fbc95e109dfd2a125c4e8c7ac80a6868d24e6d83ffc2ae5d7d375f0921"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.764726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtrnp-config-kwcsn" event={"ID":"4b45ada4-ae75-43d4-b589-0adca6844b9c","Type":"ContainerDied","Data":"7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9"} Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.764772 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de87b0142ae0b273ea772f996c36fe710c098df6bc0dc5943f169f3431a89b9" Mar 20 15:59:34 crc kubenswrapper[4730]: I0320 15:59:34.764805 4730 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtrnp-config-kwcsn" Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.229291 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gtrnp-config-kwcsn"] Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.236949 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gtrnp-config-kwcsn"] Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.542687 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b45ada4-ae75-43d4-b589-0adca6844b9c" path="/var/lib/kubelet/pods/4b45ada4-ae75-43d4-b589-0adca6844b9c/volumes" Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.644459 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.719384 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.719875 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="dnsmasq-dns" containerID="cri-o://1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5" gracePeriod=10 Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.781185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"ff0f61ebd188c7c7fcc1a66f90e54fe081c59931d9d1d9c5e4b2e1d788b94e87"} Mar 20 15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.781240 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c9def6e-27a0-4543-8d3c-07b3e4005b33","Type":"ContainerStarted","Data":"fd2c46769dd7001bd445b620fe9b18ea7ce61d1efb0a3f45063f2d9ca43eccbc"} Mar 20 
15:59:35 crc kubenswrapper[4730]: I0320 15:59:35.818290 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.572823783 podStartE2EDuration="40.818271314s" podCreationTimestamp="2026-03-20 15:58:55 +0000 UTC" firstStartedPulling="2026-03-20 15:59:29.357980786 +0000 UTC m=+1228.571352155" lastFinishedPulling="2026-03-20 15:59:33.603428317 +0000 UTC m=+1232.816799686" observedRunningTime="2026-03-20 15:59:35.813005816 +0000 UTC m=+1235.026377195" watchObservedRunningTime="2026-03-20 15:59:35.818271314 +0000 UTC m=+1235.031642683" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.093219 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"] Mar 20 15:59:36 crc kubenswrapper[4730]: E0320 15:59:36.093662 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b45ada4-ae75-43d4-b589-0adca6844b9c" containerName="ovn-config" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.093685 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b45ada4-ae75-43d4-b589-0adca6844b9c" containerName="ovn-config" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.093887 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b45ada4-ae75-43d4-b589-0adca6844b9c" containerName="ovn-config" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.095303 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.102549 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.105214 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"] Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245200 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245565 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245595 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9cc\" (UniqueName: \"kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245632 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " 
pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245656 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.245697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.261331 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347341 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347417 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9cc\" (UniqueName: \"kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347471 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347507 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347569 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.347623 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.348714 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.348759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb\") pod 
\"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.348940 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.349129 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.349672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.350875 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.353028 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.360560 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.372119 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9cc\" (UniqueName: 
\"kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc\") pod \"dnsmasq-dns-59fc649cc7-tct2h\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.420648 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.448484 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt2hc\" (UniqueName: \"kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc\") pod \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.448607 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config\") pod \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.448653 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc\") pod \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.448696 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb\") pod \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.448767 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb\") pod \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\" (UID: \"3806a27b-4a0f-439b-8660-d9ccd4bb0618\") " Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.452541 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc" (OuterVolumeSpecName: "kube-api-access-lt2hc") pod "3806a27b-4a0f-439b-8660-d9ccd4bb0618" (UID: "3806a27b-4a0f-439b-8660-d9ccd4bb0618"). InnerVolumeSpecName "kube-api-access-lt2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.507938 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3806a27b-4a0f-439b-8660-d9ccd4bb0618" (UID: "3806a27b-4a0f-439b-8660-d9ccd4bb0618"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.522406 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3806a27b-4a0f-439b-8660-d9ccd4bb0618" (UID: "3806a27b-4a0f-439b-8660-d9ccd4bb0618"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.522509 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3806a27b-4a0f-439b-8660-d9ccd4bb0618" (UID: "3806a27b-4a0f-439b-8660-d9ccd4bb0618"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.529934 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config" (OuterVolumeSpecName: "config") pod "3806a27b-4a0f-439b-8660-d9ccd4bb0618" (UID: "3806a27b-4a0f-439b-8660-d9ccd4bb0618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.550108 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.550142 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt2hc\" (UniqueName: \"kubernetes.io/projected/3806a27b-4a0f-439b-8660-d9ccd4bb0618-kube-api-access-lt2hc\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.550153 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.550161 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.550169 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3806a27b-4a0f-439b-8660-d9ccd4bb0618-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.612342 4730 scope.go:117] "RemoveContainer" containerID="3950e99a8167c1c32630e01067078d701c75fdf49d8f8666a31a81f7a02ba1d9" Mar 20 15:59:36 crc 
kubenswrapper[4730]: I0320 15:59:36.793452 4730 generic.go:334] "Generic (PLEG): container finished" podID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerID="1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5" exitCode=0 Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.793502 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.793556 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" event={"ID":"3806a27b-4a0f-439b-8660-d9ccd4bb0618","Type":"ContainerDied","Data":"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5"} Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.793580 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6868c44cb9-4zxm5" event={"ID":"3806a27b-4a0f-439b-8660-d9ccd4bb0618","Type":"ContainerDied","Data":"1d80c1a200a2b02176c3920aea1d4ac3bcd064cd7f985eb732053050a5d3edc7"} Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.793599 4730 scope.go:117] "RemoveContainer" containerID="1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.804128 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.827310 4730 scope.go:117] "RemoveContainer" containerID="065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.877050 4730 scope.go:117] "RemoveContainer" containerID="1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5" Mar 20 15:59:36 crc kubenswrapper[4730]: E0320 15:59:36.877927 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5\": container with ID starting with 1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5 not found: ID does not exist" containerID="1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.877968 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5"} err="failed to get container status \"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5\": rpc error: code = NotFound desc = could not find container \"1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5\": container with ID starting with 1e16ce44f59bba62455cbdf7f59fe0150efea77ce27a7027dd9fa2ba8ec0f7d5 not found: ID does not exist" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.877995 4730 scope.go:117] "RemoveContainer" containerID="065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.882524 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.889339 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6868c44cb9-4zxm5"] Mar 20 15:59:36 crc kubenswrapper[4730]: E0320 15:59:36.889559 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1\": container with ID starting with 065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1 not found: ID does not exist" containerID="065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1" Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.889591 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1"} err="failed to get container status \"065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1\": rpc error: code = NotFound desc = could not find container \"065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1\": container with ID starting with 065e2b37d1a5fb4fd864e13cde37d04579827706198a7e6a7ccea3b914128eb1 not found: ID does not exist" Mar 20 15:59:36 crc kubenswrapper[4730]: W0320 15:59:36.893676 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d31419_eada_4b93_bc20_bac232ced058.slice/crio-3bd944df856483bb32661f18779d6ead5d56c48a59d438569d6061308ace393b WatchSource:0}: Error finding container 3bd944df856483bb32661f18779d6ead5d56c48a59d438569d6061308ace393b: Status 404 returned error can't find the container with id 3bd944df856483bb32661f18779d6ead5d56c48a59d438569d6061308ace393b Mar 20 15:59:36 crc kubenswrapper[4730]: I0320 15:59:36.897850 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"] Mar 20 15:59:37 crc kubenswrapper[4730]: I0320 15:59:37.546109 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" path="/var/lib/kubelet/pods/3806a27b-4a0f-439b-8660-d9ccd4bb0618/volumes" Mar 20 15:59:37 crc kubenswrapper[4730]: I0320 15:59:37.803978 4730 generic.go:334] "Generic (PLEG): container finished" podID="37d31419-eada-4b93-bc20-bac232ced058" containerID="44eee6d0807ce3fdd8bb3db86c31803cf1f646803d40fa88d8701a25be8c2aaa" exitCode=0 Mar 20 15:59:37 crc kubenswrapper[4730]: I0320 15:59:37.804046 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" event={"ID":"37d31419-eada-4b93-bc20-bac232ced058","Type":"ContainerDied","Data":"44eee6d0807ce3fdd8bb3db86c31803cf1f646803d40fa88d8701a25be8c2aaa"} Mar 20 
15:59:37 crc kubenswrapper[4730]: I0320 15:59:37.804072 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" event={"ID":"37d31419-eada-4b93-bc20-bac232ced058","Type":"ContainerStarted","Data":"3bd944df856483bb32661f18779d6ead5d56c48a59d438569d6061308ace393b"} Mar 20 15:59:38 crc kubenswrapper[4730]: I0320 15:59:38.818810 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" event={"ID":"37d31419-eada-4b93-bc20-bac232ced058","Type":"ContainerStarted","Data":"bb5b7167e8f80b19b6b3c7bf7988748aafee8cd2702715c1020def5b6b2b9fb6"} Mar 20 15:59:38 crc kubenswrapper[4730]: I0320 15:59:38.859464 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" podStartSLOduration=2.859429356 podStartE2EDuration="2.859429356s" podCreationTimestamp="2026-03-20 15:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:38.848568063 +0000 UTC m=+1238.061939442" watchObservedRunningTime="2026-03-20 15:59:38.859429356 +0000 UTC m=+1238.072800735" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.175446 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.488438 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.544429 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-87csx"] Mar 20 15:59:39 crc kubenswrapper[4730]: E0320 15:59:39.545000 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="init" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.545074 4730 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="init" Mar 20 15:59:39 crc kubenswrapper[4730]: E0320 15:59:39.545154 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="dnsmasq-dns" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.545211 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="dnsmasq-dns" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.545449 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3806a27b-4a0f-439b-8660-d9ccd4bb0618" containerName="dnsmasq-dns" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.546098 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.603485 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-87csx"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.665051 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9f59-account-create-update-vmg5j"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.667113 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.667617 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts\") pod \"cinder-db-create-87csx\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.667662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc4nn\" (UniqueName: \"kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn\") pod \"cinder-db-create-87csx\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.674552 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.679930 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9f59-account-create-update-vmg5j"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.769350 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbg2\" (UniqueName: \"kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.769434 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts\") pod \"cinder-db-create-87csx\" (UID: 
\"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.769454 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc4nn\" (UniqueName: \"kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn\") pod \"cinder-db-create-87csx\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.769487 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.770165 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts\") pod \"cinder-db-create-87csx\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.784410 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.788728 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc4nn\" (UniqueName: \"kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn\") pod \"cinder-db-create-87csx\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.851912 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.872029 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9q2kz"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.873276 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbg2\" (UniqueName: \"kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.873374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.873392 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.874141 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.884670 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9q2kz"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.913426 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3959-account-create-update-qxd89"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.914466 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.916511 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-87csx" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.917976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbg2\" (UniqueName: \"kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2\") pod \"cinder-9f59-account-create-update-vmg5j\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.919715 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.961364 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3959-account-create-update-qxd89"] Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.976199 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.976322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfnf\" (UniqueName: \"kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:39 crc kubenswrapper[4730]: I0320 15:59:39.983537 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.041473 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rb4pw"] Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.077095 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rb4pw"] Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.077185 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.077819 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.078102 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2tsx\" (UniqueName: \"kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx\") pod \"barbican-3959-account-create-update-qxd89\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.078131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfnf\" (UniqueName: \"kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.078149 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts\") pod \"barbican-3959-account-create-update-qxd89\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.078576 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.079764 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.079822 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.080008 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.087043 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjvk4" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.119169 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfnf\" (UniqueName: \"kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf\") pod \"barbican-db-create-9q2kz\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.179757 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts\") pod \"barbican-3959-account-create-update-qxd89\" (UID: 
\"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.179898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.179931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hmq\" (UniqueName: \"kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.179980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.180002 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2tsx\" (UniqueName: \"kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx\") pod \"barbican-3959-account-create-update-qxd89\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.182293 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts\") pod 
\"barbican-3959-account-create-update-qxd89\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.199587 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.201793 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2tsx\" (UniqueName: \"kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx\") pod \"barbican-3959-account-create-update-qxd89\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.282953 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.282997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hmq\" (UniqueName: \"kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.283046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.290108 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.291957 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.307610 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hmq\" (UniqueName: \"kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq\") pod \"keystone-db-sync-rb4pw\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.373678 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.393437 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.504096 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-87csx"] Mar 20 15:59:40 crc kubenswrapper[4730]: W0320 15:59:40.518706 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6118ed31_b8d7_4a7c_8769_69d996d26915.slice/crio-6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465 WatchSource:0}: Error finding container 6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465: Status 404 returned error can't find the container with id 6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465 Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.585690 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9f59-account-create-update-vmg5j"] Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.668763 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9q2kz"] Mar 20 15:59:40 crc kubenswrapper[4730]: W0320 15:59:40.674223 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad93c0a8_34d6_4fee_985c_7c7307f00c0c.slice/crio-6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5 WatchSource:0}: Error finding container 6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5: Status 404 returned error can't find the container with id 6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5 Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.859172 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87csx" event={"ID":"6118ed31-b8d7-4a7c-8769-69d996d26915","Type":"ContainerStarted","Data":"a18801b5a50e28a1d043f07d02846b12496eaa787cc63d296052b7f86700e382"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 
15:59:40.859214 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87csx" event={"ID":"6118ed31-b8d7-4a7c-8769-69d996d26915","Type":"ContainerStarted","Data":"6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.861809 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f59-account-create-update-vmg5j" event={"ID":"44a72513-75fb-4b7e-912b-d28fa63d050a","Type":"ContainerStarted","Data":"ecddce73fd871590be8e4104469454a63bf36c8d5e335fcb1236e7e17748fcf3"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.861857 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f59-account-create-update-vmg5j" event={"ID":"44a72513-75fb-4b7e-912b-d28fa63d050a","Type":"ContainerStarted","Data":"6a094396f0bb1056f88ceb040f7dab6620e9ea44941063752d5aa89e4f67a92a"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.864460 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9q2kz" event={"ID":"ad93c0a8-34d6-4fee-985c-7c7307f00c0c","Type":"ContainerStarted","Data":"619c70ff24e78ebd6137bc20c79ee2dc5949bf1cca622b03e9fc4227379e48f4"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.864489 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9q2kz" event={"ID":"ad93c0a8-34d6-4fee-985c-7c7307f00c0c","Type":"ContainerStarted","Data":"6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5"} Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.871389 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3959-account-create-update-qxd89"] Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.895982 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rb4pw"] Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.903566 4730 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-create-87csx" podStartSLOduration=1.903547689 podStartE2EDuration="1.903547689s" podCreationTimestamp="2026-03-20 15:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:40.884871811 +0000 UTC m=+1240.098243190" watchObservedRunningTime="2026-03-20 15:59:40.903547689 +0000 UTC m=+1240.116919058" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.928488 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-9q2kz" podStartSLOduration=1.9284704860000002 podStartE2EDuration="1.928470486s" podCreationTimestamp="2026-03-20 15:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:40.901563165 +0000 UTC m=+1240.114934534" watchObservedRunningTime="2026-03-20 15:59:40.928470486 +0000 UTC m=+1240.141841855" Mar 20 15:59:40 crc kubenswrapper[4730]: I0320 15:59:40.938142 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9f59-account-create-update-vmg5j" podStartSLOduration=1.938123612 podStartE2EDuration="1.938123612s" podCreationTimestamp="2026-03-20 15:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:40.915400684 +0000 UTC m=+1240.128772053" watchObservedRunningTime="2026-03-20 15:59:40.938123612 +0000 UTC m=+1240.151494981" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.296935 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-ns6b5"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.299377 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.303596 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.309379 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-h4zsn" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.315682 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-ns6b5"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.372187 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qpb6s"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.373443 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.380781 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qpb6s"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.403816 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz9h\" (UniqueName: \"kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.403890 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.403922 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.403961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.466542 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c423-account-create-update-dcjc2"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.467573 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.469728 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.486387 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c423-account-create-update-dcjc2"] Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.505810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz9h\" (UniqueName: \"kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.506020 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h52h\" (UniqueName: 
\"kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h\") pod \"neutron-db-create-qpb6s\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.506107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts\") pod \"neutron-db-create-qpb6s\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.506149 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.506179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.506236 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.512294 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data\") pod 
\"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.512534 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.527866 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.528595 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz9h\" (UniqueName: \"kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h\") pod \"watcher-db-sync-ns6b5\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.616362 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.616464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h52h\" (UniqueName: \"kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h\") pod \"neutron-db-create-qpb6s\" (UID: 
\"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.616517 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts\") pod \"neutron-db-create-qpb6s\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.616749 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4t9\" (UniqueName: \"kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.625331 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts\") pod \"neutron-db-create-qpb6s\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.626033 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.650913 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h52h\" (UniqueName: \"kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h\") pod \"neutron-db-create-qpb6s\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.695331 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.718333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4t9\" (UniqueName: \"kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.718453 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.719218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.740838 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4t9\" (UniqueName: \"kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9\") pod \"neutron-c423-account-create-update-dcjc2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.784798 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.878646 4730 generic.go:334] "Generic (PLEG): container finished" podID="e01c2575-5301-494a-bf47-9a6053de9c64" containerID="c35a1209cb7b3066725c4f8438840ab79db396745f3b69d5ee16580ca7ae88eb" exitCode=0 Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.879119 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3959-account-create-update-qxd89" event={"ID":"e01c2575-5301-494a-bf47-9a6053de9c64","Type":"ContainerDied","Data":"c35a1209cb7b3066725c4f8438840ab79db396745f3b69d5ee16580ca7ae88eb"} Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.879164 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3959-account-create-update-qxd89" event={"ID":"e01c2575-5301-494a-bf47-9a6053de9c64","Type":"ContainerStarted","Data":"88bb1e1b8804c326245e327cd045f9a813fc551a35052713e8b42dc063c0a8fa"} Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.881738 4730 generic.go:334] "Generic (PLEG): container finished" podID="ad93c0a8-34d6-4fee-985c-7c7307f00c0c" containerID="619c70ff24e78ebd6137bc20c79ee2dc5949bf1cca622b03e9fc4227379e48f4" exitCode=0 Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.881794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9q2kz" event={"ID":"ad93c0a8-34d6-4fee-985c-7c7307f00c0c","Type":"ContainerDied","Data":"619c70ff24e78ebd6137bc20c79ee2dc5949bf1cca622b03e9fc4227379e48f4"} Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.888483 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb4pw" event={"ID":"92a7eed8-de7c-4816-8bd9-e922ace376ad","Type":"ContainerStarted","Data":"15e39e3baf091f230b52a236e8e09de020ef95a8e3541076e0af482275abf9c1"} Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.915514 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="6118ed31-b8d7-4a7c-8769-69d996d26915" containerID="a18801b5a50e28a1d043f07d02846b12496eaa787cc63d296052b7f86700e382" exitCode=0 Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.915598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87csx" event={"ID":"6118ed31-b8d7-4a7c-8769-69d996d26915","Type":"ContainerDied","Data":"a18801b5a50e28a1d043f07d02846b12496eaa787cc63d296052b7f86700e382"} Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.917994 4730 generic.go:334] "Generic (PLEG): container finished" podID="44a72513-75fb-4b7e-912b-d28fa63d050a" containerID="ecddce73fd871590be8e4104469454a63bf36c8d5e335fcb1236e7e17748fcf3" exitCode=0 Mar 20 15:59:41 crc kubenswrapper[4730]: I0320 15:59:41.918029 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f59-account-create-update-vmg5j" event={"ID":"44a72513-75fb-4b7e-912b-d28fa63d050a","Type":"ContainerDied","Data":"ecddce73fd871590be8e4104469454a63bf36c8d5e335fcb1236e7e17748fcf3"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.099502 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-ns6b5"] Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.289226 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qpb6s"] Mar 20 15:59:42 crc kubenswrapper[4730]: W0320 15:59:42.319980 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c94c6d8_4c40_455a_a536_7c64e3838986.slice/crio-4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5 WatchSource:0}: Error finding container 4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5: Status 404 returned error can't find the container with id 4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5 Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.385256 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-c423-account-create-update-dcjc2"] Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.932427 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c423-account-create-update-dcjc2" event={"ID":"06e5575b-c67a-46fe-8502-efc341523de2","Type":"ContainerStarted","Data":"99fba5e2cadd379521ca79369b155ec13b031c591917c4f1be4fc608956b6dda"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.932780 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c423-account-create-update-dcjc2" event={"ID":"06e5575b-c67a-46fe-8502-efc341523de2","Type":"ContainerStarted","Data":"2876d90345315b2a0809c889345800e43703e0183ea1a884fd91389c2ca8ac2a"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.934585 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qpb6s" event={"ID":"1c94c6d8-4c40-455a-a536-7c64e3838986","Type":"ContainerStarted","Data":"40e0babc7b2f63017ce242ba014b4798c26ae0c66070098c86ad2de5a7400e6c"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.934623 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qpb6s" event={"ID":"1c94c6d8-4c40-455a-a536-7c64e3838986","Type":"ContainerStarted","Data":"4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.936241 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ns6b5" event={"ID":"9577f66b-a45e-4d51-9d87-4ae757819182","Type":"ContainerStarted","Data":"2a09ead92eb5e9f3f31718e3636e13dfb29ffdd6eee5420ea19a1b73c6cb9b82"} Mar 20 15:59:42 crc kubenswrapper[4730]: I0320 15:59:42.952800 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c423-account-create-update-dcjc2" podStartSLOduration=1.952767946 podStartE2EDuration="1.952767946s" podCreationTimestamp="2026-03-20 15:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 15:59:42.949010732 +0000 UTC m=+1242.162382101" watchObservedRunningTime="2026-03-20 15:59:42.952767946 +0000 UTC m=+1242.166139315" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.392830 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.563089 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts\") pod \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.563462 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfnf\" (UniqueName: \"kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf\") pod \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\" (UID: \"ad93c0a8-34d6-4fee-985c-7c7307f00c0c\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.563693 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad93c0a8-34d6-4fee-985c-7c7307f00c0c" (UID: "ad93c0a8-34d6-4fee-985c-7c7307f00c0c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.563995 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.569833 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf" (OuterVolumeSpecName: "kube-api-access-gnfnf") pod "ad93c0a8-34d6-4fee-985c-7c7307f00c0c" (UID: "ad93c0a8-34d6-4fee-985c-7c7307f00c0c"). InnerVolumeSpecName "kube-api-access-gnfnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.589223 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-87csx" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.599749 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.617288 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.665087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts\") pod \"6118ed31-b8d7-4a7c-8769-69d996d26915\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.665189 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc4nn\" (UniqueName: \"kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn\") pod \"6118ed31-b8d7-4a7c-8769-69d996d26915\" (UID: \"6118ed31-b8d7-4a7c-8769-69d996d26915\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.665635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6118ed31-b8d7-4a7c-8769-69d996d26915" (UID: "6118ed31-b8d7-4a7c-8769-69d996d26915"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.665829 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6118ed31-b8d7-4a7c-8769-69d996d26915-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.665848 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfnf\" (UniqueName: \"kubernetes.io/projected/ad93c0a8-34d6-4fee-985c-7c7307f00c0c-kube-api-access-gnfnf\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.668649 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn" (OuterVolumeSpecName: "kube-api-access-jc4nn") pod "6118ed31-b8d7-4a7c-8769-69d996d26915" (UID: "6118ed31-b8d7-4a7c-8769-69d996d26915"). InnerVolumeSpecName "kube-api-access-jc4nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.767310 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts\") pod \"44a72513-75fb-4b7e-912b-d28fa63d050a\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.767412 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2tsx\" (UniqueName: \"kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx\") pod \"e01c2575-5301-494a-bf47-9a6053de9c64\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.767444 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts\") pod \"e01c2575-5301-494a-bf47-9a6053de9c64\" (UID: \"e01c2575-5301-494a-bf47-9a6053de9c64\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.767468 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbg2\" (UniqueName: \"kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2\") pod \"44a72513-75fb-4b7e-912b-d28fa63d050a\" (UID: \"44a72513-75fb-4b7e-912b-d28fa63d050a\") " Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.767786 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44a72513-75fb-4b7e-912b-d28fa63d050a" (UID: "44a72513-75fb-4b7e-912b-d28fa63d050a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.768103 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e01c2575-5301-494a-bf47-9a6053de9c64" (UID: "e01c2575-5301-494a-bf47-9a6053de9c64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.768279 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e01c2575-5301-494a-bf47-9a6053de9c64-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.768293 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44a72513-75fb-4b7e-912b-d28fa63d050a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.768303 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc4nn\" (UniqueName: \"kubernetes.io/projected/6118ed31-b8d7-4a7c-8769-69d996d26915-kube-api-access-jc4nn\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.770459 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2" (OuterVolumeSpecName: "kube-api-access-cmbg2") pod "44a72513-75fb-4b7e-912b-d28fa63d050a" (UID: "44a72513-75fb-4b7e-912b-d28fa63d050a"). InnerVolumeSpecName "kube-api-access-cmbg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.770829 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx" (OuterVolumeSpecName: "kube-api-access-z2tsx") pod "e01c2575-5301-494a-bf47-9a6053de9c64" (UID: "e01c2575-5301-494a-bf47-9a6053de9c64"). InnerVolumeSpecName "kube-api-access-z2tsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.869964 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2tsx\" (UniqueName: \"kubernetes.io/projected/e01c2575-5301-494a-bf47-9a6053de9c64-kube-api-access-z2tsx\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.869996 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmbg2\" (UniqueName: \"kubernetes.io/projected/44a72513-75fb-4b7e-912b-d28fa63d050a-kube-api-access-cmbg2\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.956154 4730 generic.go:334] "Generic (PLEG): container finished" podID="06e5575b-c67a-46fe-8502-efc341523de2" containerID="99fba5e2cadd379521ca79369b155ec13b031c591917c4f1be4fc608956b6dda" exitCode=0 Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.956226 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c423-account-create-update-dcjc2" event={"ID":"06e5575b-c67a-46fe-8502-efc341523de2","Type":"ContainerDied","Data":"99fba5e2cadd379521ca79369b155ec13b031c591917c4f1be4fc608956b6dda"} Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.960800 4730 generic.go:334] "Generic (PLEG): container finished" podID="1c94c6d8-4c40-455a-a536-7c64e3838986" containerID="40e0babc7b2f63017ce242ba014b4798c26ae0c66070098c86ad2de5a7400e6c" exitCode=0 Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.960867 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qpb6s" event={"ID":"1c94c6d8-4c40-455a-a536-7c64e3838986","Type":"ContainerDied","Data":"40e0babc7b2f63017ce242ba014b4798c26ae0c66070098c86ad2de5a7400e6c"} Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.967366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-87csx" event={"ID":"6118ed31-b8d7-4a7c-8769-69d996d26915","Type":"ContainerDied","Data":"6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465"} Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.967425 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d472469b6e16e310262d75432013a7a3a34ad4fdb01866ebdfaf12a061b5465" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.967374 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-87csx" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.979929 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9f59-account-create-update-vmg5j" event={"ID":"44a72513-75fb-4b7e-912b-d28fa63d050a","Type":"ContainerDied","Data":"6a094396f0bb1056f88ceb040f7dab6620e9ea44941063752d5aa89e4f67a92a"} Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.979968 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a094396f0bb1056f88ceb040f7dab6620e9ea44941063752d5aa89e4f67a92a" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.980029 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9f59-account-create-update-vmg5j" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.999658 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3959-account-create-update-qxd89" Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.999651 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3959-account-create-update-qxd89" event={"ID":"e01c2575-5301-494a-bf47-9a6053de9c64","Type":"ContainerDied","Data":"88bb1e1b8804c326245e327cd045f9a813fc551a35052713e8b42dc063c0a8fa"} Mar 20 15:59:43 crc kubenswrapper[4730]: I0320 15:59:43.999715 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88bb1e1b8804c326245e327cd045f9a813fc551a35052713e8b42dc063c0a8fa" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.004227 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9q2kz" event={"ID":"ad93c0a8-34d6-4fee-985c-7c7307f00c0c","Type":"ContainerDied","Data":"6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5"} Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.004269 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c62958dfa715bbaad2cb18642675288a24bc6d6b2b58edbfb494fdc8d1bb7c5" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.004346 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9q2kz" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.326290 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.480154 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts\") pod \"1c94c6d8-4c40-455a-a536-7c64e3838986\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.480204 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h52h\" (UniqueName: \"kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h\") pod \"1c94c6d8-4c40-455a-a536-7c64e3838986\" (UID: \"1c94c6d8-4c40-455a-a536-7c64e3838986\") " Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.480885 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c94c6d8-4c40-455a-a536-7c64e3838986" (UID: "1c94c6d8-4c40-455a-a536-7c64e3838986"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.484888 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h" (OuterVolumeSpecName: "kube-api-access-8h52h") pod "1c94c6d8-4c40-455a-a536-7c64e3838986" (UID: "1c94c6d8-4c40-455a-a536-7c64e3838986"). InnerVolumeSpecName "kube-api-access-8h52h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.581984 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c94c6d8-4c40-455a-a536-7c64e3838986-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:44 crc kubenswrapper[4730]: I0320 15:59:44.582025 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h52h\" (UniqueName: \"kubernetes.io/projected/1c94c6d8-4c40-455a-a536-7c64e3838986-kube-api-access-8h52h\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:45 crc kubenswrapper[4730]: I0320 15:59:45.021632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qpb6s" event={"ID":"1c94c6d8-4c40-455a-a536-7c64e3838986","Type":"ContainerDied","Data":"4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5"} Mar 20 15:59:45 crc kubenswrapper[4730]: I0320 15:59:45.021691 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5e1f649837c47339a572132afe7d468b5386426786fd3de46f1f95322e65e5" Mar 20 15:59:45 crc kubenswrapper[4730]: I0320 15:59:45.021736 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qpb6s" Mar 20 15:59:46 crc kubenswrapper[4730]: I0320 15:59:46.422409 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 15:59:46 crc kubenswrapper[4730]: I0320 15:59:46.473809 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:46 crc kubenswrapper[4730]: I0320 15:59:46.474065 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="dnsmasq-dns" containerID="cri-o://cca89ebad14e458072774d3863e92e5ae8961f7af2fa95160875c04d8dd584a6" gracePeriod=10 Mar 20 15:59:47 crc kubenswrapper[4730]: I0320 15:59:47.042051 4730 generic.go:334] "Generic (PLEG): container finished" podID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerID="cca89ebad14e458072774d3863e92e5ae8961f7af2fa95160875c04d8dd584a6" exitCode=0 Mar 20 15:59:47 crc kubenswrapper[4730]: I0320 15:59:47.042114 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" event={"ID":"3ae0793d-af8a-4808-b632-9f8b22a4d0c0","Type":"ContainerDied","Data":"cca89ebad14e458072774d3863e92e5ae8961f7af2fa95160875c04d8dd584a6"} Mar 20 15:59:47 crc kubenswrapper[4730]: I0320 15:59:47.933687 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.036447 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r4t9\" (UniqueName: \"kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9\") pod \"06e5575b-c67a-46fe-8502-efc341523de2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.036505 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts\") pod \"06e5575b-c67a-46fe-8502-efc341523de2\" (UID: \"06e5575b-c67a-46fe-8502-efc341523de2\") " Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.037100 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06e5575b-c67a-46fe-8502-efc341523de2" (UID: "06e5575b-c67a-46fe-8502-efc341523de2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.037224 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e5575b-c67a-46fe-8502-efc341523de2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.043737 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9" (OuterVolumeSpecName: "kube-api-access-7r4t9") pod "06e5575b-c67a-46fe-8502-efc341523de2" (UID: "06e5575b-c67a-46fe-8502-efc341523de2"). InnerVolumeSpecName "kube-api-access-7r4t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.053855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c423-account-create-update-dcjc2" event={"ID":"06e5575b-c67a-46fe-8502-efc341523de2","Type":"ContainerDied","Data":"2876d90345315b2a0809c889345800e43703e0183ea1a884fd91389c2ca8ac2a"} Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.053988 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2876d90345315b2a0809c889345800e43703e0183ea1a884fd91389c2ca8ac2a" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.054112 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c423-account-create-update-dcjc2" Mar 20 15:59:48 crc kubenswrapper[4730]: I0320 15:59:48.138603 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r4t9\" (UniqueName: \"kubernetes.io/projected/06e5575b-c67a-46fe-8502-efc341523de2-kube-api-access-7r4t9\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.287777 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.398469 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config\") pod \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.398548 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc\") pod \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.398586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb\") pod \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.398609 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb\") pod \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.399346 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hslv\" (UniqueName: \"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv\") pod \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\" (UID: \"3ae0793d-af8a-4808-b632-9f8b22a4d0c0\") " Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.403794 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv" (OuterVolumeSpecName: "kube-api-access-9hslv") pod "3ae0793d-af8a-4808-b632-9f8b22a4d0c0" (UID: "3ae0793d-af8a-4808-b632-9f8b22a4d0c0"). InnerVolumeSpecName "kube-api-access-9hslv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.438832 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ae0793d-af8a-4808-b632-9f8b22a4d0c0" (UID: "3ae0793d-af8a-4808-b632-9f8b22a4d0c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.441623 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ae0793d-af8a-4808-b632-9f8b22a4d0c0" (UID: "3ae0793d-af8a-4808-b632-9f8b22a4d0c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.443163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config" (OuterVolumeSpecName: "config") pod "3ae0793d-af8a-4808-b632-9f8b22a4d0c0" (UID: "3ae0793d-af8a-4808-b632-9f8b22a4d0c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.444418 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ae0793d-af8a-4808-b632-9f8b22a4d0c0" (UID: "3ae0793d-af8a-4808-b632-9f8b22a4d0c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.501396 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-config\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.501430 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.501444 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.501454 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:51 crc kubenswrapper[4730]: I0320 15:59:51.501465 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hslv\" (UniqueName: \"kubernetes.io/projected/3ae0793d-af8a-4808-b632-9f8b22a4d0c0-kube-api-access-9hslv\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.105948 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" event={"ID":"3ae0793d-af8a-4808-b632-9f8b22a4d0c0","Type":"ContainerDied","Data":"12c589b54c6d611757ef6559c9af4693c0c76b7c0e6f07d2b8ae5108a83f1489"} Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.106015 4730 scope.go:117] "RemoveContainer" containerID="cca89ebad14e458072774d3863e92e5ae8961f7af2fa95160875c04d8dd584a6" Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.106178 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.114507 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ns6b5" event={"ID":"9577f66b-a45e-4d51-9d87-4ae757819182","Type":"ContainerStarted","Data":"32fe76fbff47bfdd3ed0a42b1fb587052917346b2dd9af6a6803fc8251d250e7"} Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.117210 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb4pw" event={"ID":"92a7eed8-de7c-4816-8bd9-e922ace376ad","Type":"ContainerStarted","Data":"29ed2b28b91aee9b1496fc9ae566fd345c663655b5eba7831621c42547aa8e83"} Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.146284 4730 scope.go:117] "RemoveContainer" containerID="ed9c122c62fe6334e758d54aed5e4b1a868adadd18c7ec06bf5ae96e604dfa76" Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.186681 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-ns6b5" podStartSLOduration=1.482184874 podStartE2EDuration="11.186660089s" podCreationTimestamp="2026-03-20 15:59:41 +0000 UTC" firstStartedPulling="2026-03-20 15:59:42.122381947 +0000 UTC m=+1241.335753316" lastFinishedPulling="2026-03-20 15:59:51.826857152 +0000 UTC m=+1251.040228531" observedRunningTime="2026-03-20 15:59:52.143568835 +0000 UTC m=+1251.356940204" watchObservedRunningTime="2026-03-20 15:59:52.186660089 +0000 UTC m=+1251.400031458" Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.200047 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.207406 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68f7d4448c-cvqk4"] Mar 20 15:59:52 crc kubenswrapper[4730]: I0320 15:59:52.209802 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rb4pw" 
podStartSLOduration=1.310520081 podStartE2EDuration="12.209784336s" podCreationTimestamp="2026-03-20 15:59:40 +0000 UTC" firstStartedPulling="2026-03-20 15:59:40.906806152 +0000 UTC m=+1240.120177551" lastFinishedPulling="2026-03-20 15:59:51.806070417 +0000 UTC m=+1251.019441806" observedRunningTime="2026-03-20 15:59:52.196766935 +0000 UTC m=+1251.410138304" watchObservedRunningTime="2026-03-20 15:59:52.209784336 +0000 UTC m=+1251.423155705" Mar 20 15:59:53 crc kubenswrapper[4730]: I0320 15:59:53.547558 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" path="/var/lib/kubelet/pods/3ae0793d-af8a-4808-b632-9f8b22a4d0c0/volumes" Mar 20 15:59:55 crc kubenswrapper[4730]: I0320 15:59:55.642571 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68f7d4448c-cvqk4" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Mar 20 15:59:56 crc kubenswrapper[4730]: I0320 15:59:56.158460 4730 generic.go:334] "Generic (PLEG): container finished" podID="9577f66b-a45e-4d51-9d87-4ae757819182" containerID="32fe76fbff47bfdd3ed0a42b1fb587052917346b2dd9af6a6803fc8251d250e7" exitCode=0 Mar 20 15:59:56 crc kubenswrapper[4730]: I0320 15:59:56.158522 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ns6b5" event={"ID":"9577f66b-a45e-4d51-9d87-4ae757819182","Type":"ContainerDied","Data":"32fe76fbff47bfdd3ed0a42b1fb587052917346b2dd9af6a6803fc8251d250e7"} Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.168024 4730 generic.go:334] "Generic (PLEG): container finished" podID="92a7eed8-de7c-4816-8bd9-e922ace376ad" containerID="29ed2b28b91aee9b1496fc9ae566fd345c663655b5eba7831621c42547aa8e83" exitCode=0 Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.168151 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb4pw" 
event={"ID":"92a7eed8-de7c-4816-8bd9-e922ace376ad","Type":"ContainerDied","Data":"29ed2b28b91aee9b1496fc9ae566fd345c663655b5eba7831621c42547aa8e83"} Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.480969 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.608891 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pz9h\" (UniqueName: \"kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h\") pod \"9577f66b-a45e-4d51-9d87-4ae757819182\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.608990 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data\") pod \"9577f66b-a45e-4d51-9d87-4ae757819182\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.609106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data\") pod \"9577f66b-a45e-4d51-9d87-4ae757819182\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.609208 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle\") pod \"9577f66b-a45e-4d51-9d87-4ae757819182\" (UID: \"9577f66b-a45e-4d51-9d87-4ae757819182\") " Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.615142 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h" 
(OuterVolumeSpecName: "kube-api-access-9pz9h") pod "9577f66b-a45e-4d51-9d87-4ae757819182" (UID: "9577f66b-a45e-4d51-9d87-4ae757819182"). InnerVolumeSpecName "kube-api-access-9pz9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.616497 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9577f66b-a45e-4d51-9d87-4ae757819182" (UID: "9577f66b-a45e-4d51-9d87-4ae757819182"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.642146 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9577f66b-a45e-4d51-9d87-4ae757819182" (UID: "9577f66b-a45e-4d51-9d87-4ae757819182"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.668023 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data" (OuterVolumeSpecName: "config-data") pod "9577f66b-a45e-4d51-9d87-4ae757819182" (UID: "9577f66b-a45e-4d51-9d87-4ae757819182"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.712991 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.713033 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.713047 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pz9h\" (UniqueName: \"kubernetes.io/projected/9577f66b-a45e-4d51-9d87-4ae757819182-kube-api-access-9pz9h\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:57 crc kubenswrapper[4730]: I0320 15:59:57.713061 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577f66b-a45e-4d51-9d87-4ae757819182-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.182897 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-ns6b5" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.182859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-ns6b5" event={"ID":"9577f66b-a45e-4d51-9d87-4ae757819182","Type":"ContainerDied","Data":"2a09ead92eb5e9f3f31718e3636e13dfb29ffdd6eee5420ea19a1b73c6cb9b82"} Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.182999 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a09ead92eb5e9f3f31718e3636e13dfb29ffdd6eee5420ea19a1b73c6cb9b82" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.609719 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.756795 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data\") pod \"92a7eed8-de7c-4816-8bd9-e922ace376ad\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.757059 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle\") pod \"92a7eed8-de7c-4816-8bd9-e922ace376ad\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.757538 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hmq\" (UniqueName: \"kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq\") pod \"92a7eed8-de7c-4816-8bd9-e922ace376ad\" (UID: \"92a7eed8-de7c-4816-8bd9-e922ace376ad\") " Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.761961 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq" (OuterVolumeSpecName: "kube-api-access-h4hmq") pod "92a7eed8-de7c-4816-8bd9-e922ace376ad" (UID: "92a7eed8-de7c-4816-8bd9-e922ace376ad"). InnerVolumeSpecName "kube-api-access-h4hmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.782699 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92a7eed8-de7c-4816-8bd9-e922ace376ad" (UID: "92a7eed8-de7c-4816-8bd9-e922ace376ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.796782 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data" (OuterVolumeSpecName: "config-data") pod "92a7eed8-de7c-4816-8bd9-e922ace376ad" (UID: "92a7eed8-de7c-4816-8bd9-e922ace376ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.859827 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.859858 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a7eed8-de7c-4816-8bd9-e922ace376ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:58 crc kubenswrapper[4730]: I0320 15:59:58.859870 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4hmq\" (UniqueName: \"kubernetes.io/projected/92a7eed8-de7c-4816-8bd9-e922ace376ad-kube-api-access-h4hmq\") on node \"crc\" DevicePath \"\"" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.191896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rb4pw" event={"ID":"92a7eed8-de7c-4816-8bd9-e922ace376ad","Type":"ContainerDied","Data":"15e39e3baf091f230b52a236e8e09de020ef95a8e3541076e0af482275abf9c1"} Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.191950 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rb4pw" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.191959 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e39e3baf091f230b52a236e8e09de020ef95a8e3541076e0af482275abf9c1" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.424396 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rmtsq"] Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425181 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="init" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425210 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="init" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425239 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a72513-75fb-4b7e-912b-d28fa63d050a" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425268 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a72513-75fb-4b7e-912b-d28fa63d050a" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425285 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a7eed8-de7c-4816-8bd9-e922ace376ad" containerName="keystone-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425293 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a7eed8-de7c-4816-8bd9-e922ace376ad" containerName="keystone-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425311 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="dnsmasq-dns" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425319 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" 
containerName="dnsmasq-dns" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425343 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e5575b-c67a-46fe-8502-efc341523de2" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425352 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e5575b-c67a-46fe-8502-efc341523de2" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425363 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c94c6d8-4c40-455a-a536-7c64e3838986" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425371 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c94c6d8-4c40-455a-a536-7c64e3838986" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425381 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01c2575-5301-494a-bf47-9a6053de9c64" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425389 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01c2575-5301-494a-bf47-9a6053de9c64" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425404 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad93c0a8-34d6-4fee-985c-7c7307f00c0c" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425412 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad93c0a8-34d6-4fee-985c-7c7307f00c0c" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425423 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577f66b-a45e-4d51-9d87-4ae757819182" containerName="watcher-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425431 4730 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9577f66b-a45e-4d51-9d87-4ae757819182" containerName="watcher-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: E0320 15:59:59.425446 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6118ed31-b8d7-4a7c-8769-69d996d26915" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425455 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6118ed31-b8d7-4a7c-8769-69d996d26915" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425654 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e5575b-c67a-46fe-8502-efc341523de2" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425684 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a7eed8-de7c-4816-8bd9-e922ace376ad" containerName="keystone-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425715 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a72513-75fb-4b7e-912b-d28fa63d050a" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425742 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577f66b-a45e-4d51-9d87-4ae757819182" containerName="watcher-db-sync" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425754 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01c2575-5301-494a-bf47-9a6053de9c64" containerName="mariadb-account-create-update" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425774 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6118ed31-b8d7-4a7c-8769-69d996d26915" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425794 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae0793d-af8a-4808-b632-9f8b22a4d0c0" containerName="dnsmasq-dns" Mar 20 
15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425806 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad93c0a8-34d6-4fee-985c-7c7307f00c0c" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.425825 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c94c6d8-4c40-455a-a536-7c64e3838986" containerName="mariadb-database-create" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.426630 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.430449 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.430524 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.430619 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjvk4" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.432856 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.435338 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.437189 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.438761 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.446951 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573564 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573619 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573660 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d8h\" (UniqueName: \"kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573703 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 
15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573723 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj56l\" (UniqueName: \"kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573775 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573798 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573816 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573837 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " 
pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573870 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573912 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.573967 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.597631 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rmtsq"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.624548 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.625727 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.632139 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-h4zsn" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.632719 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.638331 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.639458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.645465 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.653977 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.675811 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.675865 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj56l\" (UniqueName: \"kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.675936 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.675971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.675994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676017 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676167 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676292 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.676333 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4d8h\" (UniqueName: \"kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.679306 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.681329 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.683143 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.688839 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.693560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.694301 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.695044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.700941 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.712619 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.714063 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.722600 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.727851 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.728380 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc 
kubenswrapper[4730]: I0320 15:59:59.728725 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.730740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4d8h\" (UniqueName: \"kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h\") pod \"dnsmasq-dns-758cdcd5c9-9m42d\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.736440 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.738812 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj56l\" (UniqueName: \"kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l\") pod \"keystone-bootstrap-rmtsq\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.764669 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rmtsq" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779341 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779384 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2tj\" (UniqueName: \"kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779405 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " 
pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779490 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bmh\" (UniqueName: \"kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779524 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779571 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.779625 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.780106 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.821157 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-z9mtx"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.822441 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z9mtx" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.832879 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.833180 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c2wgv" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.833346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.846616 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z9mtx"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884614 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884688 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvh7\" (UniqueName: \"kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884809 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884845 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884874 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884905 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2tj\" (UniqueName: \"kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") 
" pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884956 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.884986 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.885026 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.885061 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.885093 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52bmh\" (UniqueName: \"kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 
15:59:59.885163 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.886560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.889218 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.906911 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.908462 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.914121 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.914951 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.919271 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2tj\" (UniqueName: \"kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.920712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.932233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52bmh\" (UniqueName: \"kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh\") pod \"watcher-applier-0\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.954338 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.963467 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.965505 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.973580 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.973655 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.988153 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989161 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7sfc\" (UniqueName: \"kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989201 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989229 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvh7\" (UniqueName: \"kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989275 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989316 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989339 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989371 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 15:59:59 crc kubenswrapper[4730]: I0320 15:59:59.989400 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.000493 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.000756 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.003760 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.004652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.007383 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hbplf"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.008761 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.018892 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.019346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.019539 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bg7s8" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.019623 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.034753 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvh7\" (UniqueName: \"kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7\") pod \"watcher-api-0\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.060557 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hbplf"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.082066 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tz6x7"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.083108 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.084911 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wcvgq" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.086406 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091423 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091490 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091523 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091552 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091587 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qfr\" (UniqueName: \"kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091604 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091625 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091646 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091671 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091689 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7sfc\" (UniqueName: \"kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091749 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091771 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091787 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4pc\" (UniqueName: 
\"kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.091811 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.094664 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tz6x7"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.097820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.101116 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.113268 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.146215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7sfc\" (UniqueName: \"kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc\") pod \"neutron-db-sync-z9mtx\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc 
kubenswrapper[4730]: I0320 16:00:00.162978 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-x2t9r"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.164123 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.166486 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wvjmt" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.166963 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.167136 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.185676 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.187044 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195673 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195772 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195790 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np4pc\" (UniqueName: \"kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195943 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.195981 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.196002 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggsr\" (UniqueName: \"kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.196028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.196770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.196822 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197114 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24qfr\" (UniqueName: \"kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr\") pod 
\"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197139 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197169 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197212 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197231 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197265 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " 
pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197370 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.197704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.200499 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.203969 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.206050 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.206871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data\") pod 
\"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.207603 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.208124 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.208485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.211413 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x2t9r"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.221985 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.224560 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.227232 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-np4pc\" (UniqueName: \"kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.227558 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.231020 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24qfr\" (UniqueName: \"kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr\") pod \"cinder-db-sync-hbplf\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.270884 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.307437 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567040-2zl4f"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310092 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bggsr\" (UniqueName: \"kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310399 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310585 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5dq8\" (UniqueName: \"kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310627 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310661 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310743 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310808 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310848 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.310898 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgg6\" (UniqueName: \"kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.316012 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.317767 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.323595 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.323840 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.324028 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.329969 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.329974 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.333080 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.337876 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.338169 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.338645 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.339103 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bggsr\" (UniqueName: \"kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr\") pod \"barbican-db-sync-tz6x7\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.355132 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567040-2zl4f"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.359756 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.386647 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.412161 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.412678 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5dq8\" (UniqueName: \"kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.412994 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.413018 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.413778 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.413842 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc 
kubenswrapper[4730]: I0320 16:00:00.413983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.414901 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.414986 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415190 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415321 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxgg6\" (UniqueName: \"kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415375 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415669 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415715 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxpm7\" (UniqueName: \"kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415786 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 
16:00:00.415922 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459cr\" (UniqueName: \"kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr\") pod \"auto-csr-approver-29567040-2zl4f\" (UID: \"97b63a10-b572-4a37-a2a4-079852aa2d3d\") " pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.415992 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.416041 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.417217 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.417722 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 
16:00:00.420077 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.420433 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.420575 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.426599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.431116 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5dq8\" (UniqueName: \"kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8\") pod \"dnsmasq-dns-77878fc4cf-8s7hs\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.438192 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.441148 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxgg6\" (UniqueName: \"kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6\") pod \"placement-db-sync-x2t9r\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.482053 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rmtsq"] Mar 20 16:00:00 crc kubenswrapper[4730]: W0320 16:00:00.494319 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3349c2c_6f29_425d_9d25_b4f23821cfcc.slice/crio-707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a WatchSource:0}: Error finding container 707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a: Status 404 returned error can't find the container with id 707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.501262 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.517833 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.517998 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxpm7\" (UniqueName: \"kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.518041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-459cr\" (UniqueName: \"kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr\") pod \"auto-csr-approver-29567040-2zl4f\" (UID: \"97b63a10-b572-4a37-a2a4-079852aa2d3d\") " pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.518089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.520033 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.525747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.538101 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.538208 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxpm7\" (UniqueName: \"kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7\") pod \"collect-profiles-29567040-cz69b\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.556637 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-459cr\" (UniqueName: \"kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr\") pod \"auto-csr-approver-29567040-2zl4f\" (UID: \"97b63a10-b572-4a37-a2a4-079852aa2d3d\") " pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.618305 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.620064 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.634518 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.640898 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b8g88" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.641055 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.641163 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.641430 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.647042 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.679944 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.686870 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.713569 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733730 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733826 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxjd\" (UniqueName: \"kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733960 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.733988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.734107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.734141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.760847 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.771876 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.775643 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.780575 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.780912 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.799211 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835349 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835390 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835434 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835454 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4t8c\" 
(UniqueName: \"kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835506 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835552 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835570 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835604 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835628 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835897 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxjd\" (UniqueName: \"kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835936 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835965 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.835983 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.843491 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.848013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.848360 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.857220 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.865290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.870296 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.871535 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.876324 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxjd\" (UniqueName: \"kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: 
I0320 16:00:00.907665 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.914553 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: W0320 16:00:00.940319 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod468631ad_821b_469e_a166_1d32d370e5fa.slice/crio-61de70dac9ca369e87e9de4023e0b6d3ab234abdc4d63ca022c46c3f926b57cd WatchSource:0}: Error finding container 61de70dac9ca369e87e9de4023e0b6d3ab234abdc4d63ca022c46c3f926b57cd: Status 404 returned error can't find the container with id 61de70dac9ca369e87e9de4023e0b6d3ab234abdc4d63ca022c46c3f926b57cd Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945149 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945304 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945375 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945404 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4t8c\" (UniqueName: \"kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945508 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.945530 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.946164 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.946777 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.946893 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.961302 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.973326 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.973347 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.973880 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4t8c\" (UniqueName: \"kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.976549 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:00 crc kubenswrapper[4730]: I0320 16:00:00.994630 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:01 crc 
kubenswrapper[4730]: I0320 16:00:01.013179 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.155263 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z9mtx"] Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.235827 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f653a7b-251e-4eb2-92cd-74e23ac4dba5","Type":"ContainerStarted","Data":"c21d7cc71db24a4080b26c4c7e24c638e9de88158fbe9899bd6305ba181030ef"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.236984 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rmtsq" event={"ID":"e3349c2c-6f29-425d-9d25-b4f23821cfcc","Type":"ContainerStarted","Data":"707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.238139 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"d03f482cee6be89afdaaa20261b8838840560db337ea7e22e3e773497fe55805"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.239577 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z9mtx" event={"ID":"09f27249-61fb-4e13-9eb9-9b804f256d81","Type":"ContainerStarted","Data":"a168269d42bdf34c043574c3c59fe418fbfcc2023ca3015cc5326a7e3f76f715"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.245429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" 
event={"ID":"c857c34a-0efc-4ebe-8f42-e562e88de7a4","Type":"ContainerStarted","Data":"4c72e558e23a7c4c418bb2168aa9ecd0cbbacb82442ab392bdc56b6a65616fb6"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.247427 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerStarted","Data":"61de70dac9ca369e87e9de4023e0b6d3ab234abdc4d63ca022c46c3f926b57cd"} Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.349524 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.382579 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tz6x7"] Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.425152 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.442501 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x2t9r"] Mar 20 16:00:01 crc kubenswrapper[4730]: W0320 16:00:01.450425 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48fc8af0_e30f_4f3f_88d3_8b054c6359ef.slice/crio-af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a WatchSource:0}: Error finding container af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a: Status 404 returned error can't find the container with id af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.470596 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hbplf"] Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.596727 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:01 crc 
kubenswrapper[4730]: I0320 16:00:01.604952 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567040-2zl4f"] Mar 20 16:00:01 crc kubenswrapper[4730]: W0320 16:00:01.615117 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97b63a10_b572_4a37_a2a4_079852aa2d3d.slice/crio-5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a WatchSource:0}: Error finding container 5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a: Status 404 returned error can't find the container with id 5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.755364 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"] Mar 20 16:00:01 crc kubenswrapper[4730]: I0320 16:00:01.923147 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.111499 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.189081 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.316682 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerStarted","Data":"7f071b5518e739d74a059048c81e33bc96125faedfd609490b9b1dda80135229"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.320839 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x2t9r" event={"ID":"a05675d7-cd2f-4810-862b-cb0d2d13cbdd","Type":"ContainerStarted","Data":"31cc41440870730294a7b216b0c2b45c7a76296191729b22f502e1990a4511bd"} Mar 20 16:00:02 crc 
kubenswrapper[4730]: I0320 16:00:02.324785 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.331526 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" event={"ID":"82ffcdbb-cebb-443a-a8af-3c3543bea13d","Type":"ContainerStarted","Data":"7a2cb82ca156020b21392ad78c85cb4ddbdb0874dad3d5fcc0112cae0cde0511"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.345764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hbplf" event={"ID":"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd","Type":"ContainerStarted","Data":"ad1075e305b4a94d7393955deeb754baca56955fca19f6133e616c9e84808c7e"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.349500 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerStarted","Data":"fe390ee094d1d3c5ad2dd1bacd57dfb2a471207a757c9831b79190ce13256f10"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.353846 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" event={"ID":"c857c34a-0efc-4ebe-8f42-e562e88de7a4","Type":"ContainerStarted","Data":"7c04756ee5298735d97672f2fb6129f8d01b04d3f8a464980ca1051cf42e4065"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.380093 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerStarted","Data":"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.382539 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tz6x7" event={"ID":"48fc8af0-e30f-4f3f-88d3-8b054c6359ef","Type":"ContainerStarted","Data":"af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a"} Mar 20 
16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.411552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" event={"ID":"97b63a10-b572-4a37-a2a4-079852aa2d3d","Type":"ContainerStarted","Data":"5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.428416 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" event={"ID":"672cfda1-2ec8-41fe-b3dc-eabe4e60726d","Type":"ContainerStarted","Data":"4bf30bdbb0501e2c053e337b9252dd926d2eb7fdfb0b6f4828ca3c6d25528b11"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.430305 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rmtsq" event={"ID":"e3349c2c-6f29-425d-9d25-b4f23821cfcc","Type":"ContainerStarted","Data":"89ef3de4f8d5002494a53f05fdcc4fa61cfc7cf388b35f48076aa3b98fc5e176"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.433468 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z9mtx" event={"ID":"09f27249-61fb-4e13-9eb9-9b804f256d81","Type":"ContainerStarted","Data":"3f4c141955a3579b06be021435ce1c3642e2a9b4483a932d05648a4559764229"} Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.445230 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:02 crc kubenswrapper[4730]: I0320 16:00:02.515576 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.452585 4730 generic.go:334] "Generic (PLEG): container finished" podID="672cfda1-2ec8-41fe-b3dc-eabe4e60726d" containerID="aa12014b37ee0e01204777f8c797059805894b107ea52ba01e8a5d24299b55a5" exitCode=0 Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.454659 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" event={"ID":"672cfda1-2ec8-41fe-b3dc-eabe4e60726d","Type":"ContainerDied","Data":"aa12014b37ee0e01204777f8c797059805894b107ea52ba01e8a5d24299b55a5"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.461256 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerStarted","Data":"20e1876e1a0a25a88aae1c6629f93da770afea8c540f5629af87b68779bdafc7"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.461696 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerStarted","Data":"bbee271175baced0ec1f00dd406517b1f7f205b8c8460fadc05517bb59103028"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.469474 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerStarted","Data":"074644c96fd740e9300e0c15a05692e6097bf6107a8f64dc4cfd8d399bd7edb2"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.473334 4730 generic.go:334] "Generic (PLEG): container finished" podID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerID="8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52" exitCode=0 Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.473400 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" event={"ID":"82ffcdbb-cebb-443a-a8af-3c3543bea13d","Type":"ContainerDied","Data":"8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.477883 4730 generic.go:334] "Generic (PLEG): container finished" podID="c857c34a-0efc-4ebe-8f42-e562e88de7a4" containerID="7c04756ee5298735d97672f2fb6129f8d01b04d3f8a464980ca1051cf42e4065" exitCode=0 Mar 
20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.479181 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" event={"ID":"c857c34a-0efc-4ebe-8f42-e562e88de7a4","Type":"ContainerDied","Data":"7c04756ee5298735d97672f2fb6129f8d01b04d3f8a464980ca1051cf42e4065"} Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.532082 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rmtsq" podStartSLOduration=4.53206494 podStartE2EDuration="4.53206494s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:03.518516237 +0000 UTC m=+1262.731887606" watchObservedRunningTime="2026-03-20 16:00:03.53206494 +0000 UTC m=+1262.745436309" Mar 20 16:00:03 crc kubenswrapper[4730]: I0320 16:00:03.565685 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-z9mtx" podStartSLOduration=4.565665052 podStartE2EDuration="4.565665052s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:03.561549759 +0000 UTC m=+1262.774921148" watchObservedRunningTime="2026-03-20 16:00:03.565665052 +0000 UTC m=+1262.779036421" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.482098 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.493522 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.493612 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cdcd5c9-9m42d" event={"ID":"c857c34a-0efc-4ebe-8f42-e562e88de7a4","Type":"ContainerDied","Data":"4c72e558e23a7c4c418bb2168aa9ecd0cbbacb82442ab392bdc56b6a65616fb6"} Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.493947 4730 scope.go:117] "RemoveContainer" containerID="7c04756ee5298735d97672f2fb6129f8d01b04d3f8a464980ca1051cf42e4065" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.540920 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.540992 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.541032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.541076 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc 
kubenswrapper[4730]: I0320 16:00:04.541205 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4d8h\" (UniqueName: \"kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.541353 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc\") pod \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\" (UID: \"c857c34a-0efc-4ebe-8f42-e562e88de7a4\") " Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.550398 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h" (OuterVolumeSpecName: "kube-api-access-q4d8h") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "kube-api-access-q4d8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.630892 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.635872 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config" (OuterVolumeSpecName: "config") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.643884 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.643911 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.643920 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4d8h\" (UniqueName: \"kubernetes.io/projected/c857c34a-0efc-4ebe-8f42-e562e88de7a4-kube-api-access-q4d8h\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.648011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.682065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.684190 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c857c34a-0efc-4ebe-8f42-e562e88de7a4" (UID: "c857c34a-0efc-4ebe-8f42-e562e88de7a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.749664 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.749701 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:04 crc kubenswrapper[4730]: I0320 16:00:04.749731 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c857c34a-0efc-4ebe-8f42-e562e88de7a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.007700 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.069027 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume\") pod \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.069257 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxpm7\" (UniqueName: \"kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7\") pod \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.069302 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume\") pod \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\" (UID: \"672cfda1-2ec8-41fe-b3dc-eabe4e60726d\") " Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.071853 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume" (OuterVolumeSpecName: "config-volume") pod "672cfda1-2ec8-41fe-b3dc-eabe4e60726d" (UID: "672cfda1-2ec8-41fe-b3dc-eabe4e60726d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.073633 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "672cfda1-2ec8-41fe-b3dc-eabe4e60726d" (UID: "672cfda1-2ec8-41fe-b3dc-eabe4e60726d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.100579 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7" (OuterVolumeSpecName: "kube-api-access-wxpm7") pod "672cfda1-2ec8-41fe-b3dc-eabe4e60726d" (UID: "672cfda1-2ec8-41fe-b3dc-eabe4e60726d"). InnerVolumeSpecName "kube-api-access-wxpm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.129115 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.150368 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758cdcd5c9-9m42d"] Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.176391 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.176425 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxpm7\" (UniqueName: \"kubernetes.io/projected/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-kube-api-access-wxpm7\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.176438 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672cfda1-2ec8-41fe-b3dc-eabe4e60726d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.511896 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" event={"ID":"82ffcdbb-cebb-443a-a8af-3c3543bea13d","Type":"ContainerStarted","Data":"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1"} Mar 20 16:00:05 crc 
kubenswrapper[4730]: I0320 16:00:05.513272 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.519173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerStarted","Data":"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126"} Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.519406 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api-log" containerID="cri-o://f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5" gracePeriod=30 Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.519874 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api" containerID="cri-o://3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126" gracePeriod=30 Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.520014 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.521446 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.527114 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b" event={"ID":"672cfda1-2ec8-41fe-b3dc-eabe4e60726d","Type":"ContainerDied","Data":"4bf30bdbb0501e2c053e337b9252dd926d2eb7fdfb0b6f4828ca3c6d25528b11"} 
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.527160 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf30bdbb0501e2c053e337b9252dd926d2eb7fdfb0b6f4828ca3c6d25528b11"
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.527212 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.546819 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" podStartSLOduration=6.546796606 podStartE2EDuration="6.546796606s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:05.531982135 +0000 UTC m=+1264.745353504" watchObservedRunningTime="2026-03-20 16:00:05.546796606 +0000 UTC m=+1264.760167975"
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.560861 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.56084403 podStartE2EDuration="6.56084403s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:05.554984459 +0000 UTC m=+1264.768355828" watchObservedRunningTime="2026-03-20 16:00:05.56084403 +0000 UTC m=+1264.774215399"
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.571215 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c857c34a-0efc-4ebe-8f42-e562e88de7a4" path="/var/lib/kubelet/pods/c857c34a-0efc-4ebe-8f42-e562e88de7a4/volumes"
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.573503 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f653a7b-251e-4eb2-92cd-74e23ac4dba5","Type":"ContainerStarted","Data":"859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba"}
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.576770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"c057d3a2f6ef1e71f3dc2bc7abb9c07a0dfab9e5b78bf4eb4546276a24c2d109"}
Mar 20 16:00:05 crc kubenswrapper[4730]: I0320 16:00:05.597185 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.971820327 podStartE2EDuration="6.597162422s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:00.863683595 +0000 UTC m=+1260.077054964" lastFinishedPulling="2026-03-20 16:00:04.48902569 +0000 UTC m=+1263.702397059" observedRunningTime="2026-03-20 16:00:05.590883862 +0000 UTC m=+1264.804255241" watchObservedRunningTime="2026-03-20 16:00:05.597162422 +0000 UTC m=+1264.810533791"
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.048217 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.256383911 podStartE2EDuration="7.048200439s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:00.85675141 +0000 UTC m=+1260.070122779" lastFinishedPulling="2026-03-20 16:00:04.648567938 +0000 UTC m=+1263.861939307" observedRunningTime="2026-03-20 16:00:05.619608484 +0000 UTC m=+1264.832979853" watchObservedRunningTime="2026-03-20 16:00:06.048200439 +0000 UTC m=+1265.261571808"
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.587897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerStarted","Data":"5b6df27b1ba23b9df05a1fffaaba05b754c2fb2f90da0b4adb5478040787f576"}
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.588003 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-log" containerID="cri-o://20e1876e1a0a25a88aae1c6629f93da770afea8c540f5629af87b68779bdafc7" gracePeriod=30
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.588039 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-httpd" containerID="cri-o://5b6df27b1ba23b9df05a1fffaaba05b754c2fb2f90da0b4adb5478040787f576" gracePeriod=30
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.592194 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerStarted","Data":"9a79a1bc9a5cd84e40618d9d5567e18d723a975ff2aadedb78d41e84461ef193"}
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.592326 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-httpd" containerID="cri-o://9a79a1bc9a5cd84e40618d9d5567e18d723a975ff2aadedb78d41e84461ef193" gracePeriod=30
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.592485 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-log" containerID="cri-o://074644c96fd740e9300e0c15a05692e6097bf6107a8f64dc4cfd8d399bd7edb2" gracePeriod=30
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.595454 4730 generic.go:334] "Generic (PLEG): container finished" podID="468631ad-821b-469e-a166-1d32d370e5fa" containerID="f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5" exitCode=143
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.595557 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerDied","Data":"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5"}
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.618750 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.618733948 podStartE2EDuration="7.618733948s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:06.617217825 +0000 UTC m=+1265.830589194" watchObservedRunningTime="2026-03-20 16:00:06.618733948 +0000 UTC m=+1265.832105317"
Mar 20 16:00:06 crc kubenswrapper[4730]: I0320 16:00:06.651677 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.651659475 podStartE2EDuration="7.651659475s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:06.644409083 +0000 UTC m=+1265.857780452" watchObservedRunningTime="2026-03-20 16:00:06.651659475 +0000 UTC m=+1265.865030834"
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.607781 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4458311-d050-4887-b4c7-6df6c993d66e" containerID="9a79a1bc9a5cd84e40618d9d5567e18d723a975ff2aadedb78d41e84461ef193" exitCode=0
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.607828 4730 generic.go:334] "Generic (PLEG): container finished" podID="c4458311-d050-4887-b4c7-6df6c993d66e" containerID="074644c96fd740e9300e0c15a05692e6097bf6107a8f64dc4cfd8d399bd7edb2" exitCode=143
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.607890 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerDied","Data":"9a79a1bc9a5cd84e40618d9d5567e18d723a975ff2aadedb78d41e84461ef193"}
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.607919 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerDied","Data":"074644c96fd740e9300e0c15a05692e6097bf6107a8f64dc4cfd8d399bd7edb2"}
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.615057 4730 generic.go:334] "Generic (PLEG): container finished" podID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerID="5b6df27b1ba23b9df05a1fffaaba05b754c2fb2f90da0b4adb5478040787f576" exitCode=0
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.615091 4730 generic.go:334] "Generic (PLEG): container finished" podID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerID="20e1876e1a0a25a88aae1c6629f93da770afea8c540f5629af87b68779bdafc7" exitCode=143
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.615125 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerDied","Data":"5b6df27b1ba23b9df05a1fffaaba05b754c2fb2f90da0b4adb5478040787f576"}
Mar 20 16:00:07 crc kubenswrapper[4730]: I0320 16:00:07.615162 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerDied","Data":"20e1876e1a0a25a88aae1c6629f93da770afea8c540f5629af87b68779bdafc7"}
Mar 20 16:00:08 crc kubenswrapper[4730]: I0320 16:00:08.624794 4730 generic.go:334] "Generic (PLEG): container finished" podID="e3349c2c-6f29-425d-9d25-b4f23821cfcc" containerID="89ef3de4f8d5002494a53f05fdcc4fa61cfc7cf388b35f48076aa3b98fc5e176" exitCode=0
Mar 20 16:00:08 crc kubenswrapper[4730]: I0320 16:00:08.625138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rmtsq" event={"ID":"e3349c2c-6f29-425d-9d25-b4f23821cfcc","Type":"ContainerDied","Data":"89ef3de4f8d5002494a53f05fdcc4fa61cfc7cf388b35f48076aa3b98fc5e176"}
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.636805 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f6c808e-d523-48bd-8ec2-28b625834317" containerID="c057d3a2f6ef1e71f3dc2bc7abb9c07a0dfab9e5b78bf4eb4546276a24c2d109" exitCode=1
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.637073 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"c057d3a2f6ef1e71f3dc2bc7abb9c07a0dfab9e5b78bf4eb4546276a24c2d109"}
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.637922 4730 scope.go:117] "RemoveContainer" containerID="c057d3a2f6ef1e71f3dc2bc7abb9c07a0dfab9e5b78bf4eb4546276a24c2d109"
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.964945 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.965222 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.974644 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Mar 20 16:00:09 crc kubenswrapper[4730]: I0320 16:00:09.974673 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.000154 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.272446 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.540661 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs"
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.630002 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"]
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.631662 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="dnsmasq-dns" containerID="cri-o://bb5b7167e8f80b19b6b3c7bf7988748aafee8cd2702715c1020def5b6b2b9fb6" gracePeriod=10
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.715971 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Mar 20 16:00:10 crc kubenswrapper[4730]: I0320 16:00:10.759637 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Mar 20 16:00:11 crc kubenswrapper[4730]: I0320 16:00:11.423102 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused"
Mar 20 16:00:11 crc kubenswrapper[4730]: I0320 16:00:11.670098 4730 generic.go:334] "Generic (PLEG): container finished" podID="37d31419-eada-4b93-bc20-bac232ced058" containerID="bb5b7167e8f80b19b6b3c7bf7988748aafee8cd2702715c1020def5b6b2b9fb6" exitCode=0
Mar 20 16:00:11 crc kubenswrapper[4730]: I0320 16:00:11.670314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" event={"ID":"37d31419-eada-4b93-bc20-bac232ced058","Type":"ContainerDied","Data":"bb5b7167e8f80b19b6b3c7bf7988748aafee8cd2702715c1020def5b6b2b9fb6"}
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.200846 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.213644 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.330754 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.330805 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.330835 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.330869 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.330899 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.331010 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgxjd\" (UniqueName: \"kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.331042 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4t8c\" (UniqueName: \"kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.331132 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.335239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.335427 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.335516 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.336799 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.336873 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.337319 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts" (OuterVolumeSpecName: "scripts") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.338824 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.339341 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs" (OuterVolumeSpecName: "logs") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.339769 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.340230 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c" (OuterVolumeSpecName: "kube-api-access-g4t8c") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "kube-api-access-g4t8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.342989 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd" (OuterVolumeSpecName: "kube-api-access-jgxjd") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "kube-api-access-jgxjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344278 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs" (OuterVolumeSpecName: "logs") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344357 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344428 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344493 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle\") pod \"b821c271-d46c-4a68-a6ea-438e616c4d47\" (UID: \"b821c271-d46c-4a68-a6ea-438e616c4d47\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344527 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.344561 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs\") pod \"c4458311-d050-4887-b4c7-6df6c993d66e\" (UID: \"c4458311-d050-4887-b4c7-6df6c993d66e\") "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.345975 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.345999 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346016 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346028 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgxjd\" (UniqueName: \"kubernetes.io/projected/c4458311-d050-4887-b4c7-6df6c993d66e-kube-api-access-jgxjd\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346047 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4t8c\" (UniqueName: \"kubernetes.io/projected/b821c271-d46c-4a68-a6ea-438e616c4d47-kube-api-access-g4t8c\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346058 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346068 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346078 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b821c271-d46c-4a68-a6ea-438e616c4d47-logs\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.346088 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4458311-d050-4887-b4c7-6df6c993d66e-logs\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.372610 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts" (OuterVolumeSpecName: "scripts") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.377676 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.380730 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.386521 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.414496 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.414612 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.421631 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data" (OuterVolumeSpecName: "config-data") pod "c4458311-d050-4887-b4c7-6df6c993d66e" (UID: "c4458311-d050-4887-b4c7-6df6c993d66e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.428376 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data" (OuterVolumeSpecName: "config-data") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.436331 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b821c271-d46c-4a68-a6ea-438e616c4d47" (UID: "b821c271-d46c-4a68-a6ea-438e616c4d47"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.447954 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.447995 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448008 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448021 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448033 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448043 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448055 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448065 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b821c271-d46c-4a68-a6ea-438e616c4d47-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.448075 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4458311-d050-4887-b4c7-6df6c993d66e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.682168 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4458311-d050-4887-b4c7-6df6c993d66e","Type":"ContainerDied","Data":"fe390ee094d1d3c5ad2dd1bacd57dfb2a471207a757c9831b79190ce13256f10"}
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.682225 4730 scope.go:117] "RemoveContainer" containerID="9a79a1bc9a5cd84e40618d9d5567e18d723a975ff2aadedb78d41e84461ef193"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.682346 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.685303 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" containerID="cri-o://859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" gracePeriod=30
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.685427 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.686201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b821c271-d46c-4a68-a6ea-438e616c4d47","Type":"ContainerDied","Data":"bbee271175baced0ec1f00dd406517b1f7f205b8c8460fadc05517bb59103028"}
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.734924 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.759112 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.781765 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.796988 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.816156 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817774 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-httpd"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817794 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-httpd"
Mar 20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817812 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="672cfda1-2ec8-41fe-b3dc-eabe4e60726d" containerName="collect-profiles"
Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817818 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="672cfda1-2ec8-41fe-b3dc-eabe4e60726d" containerName="collect-profiles"
Mar 
20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817830 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817837 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817848 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-httpd" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817854 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-httpd" Mar 20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817868 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c857c34a-0efc-4ebe-8f42-e562e88de7a4" containerName="init" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817874 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c857c34a-0efc-4ebe-8f42-e562e88de7a4" containerName="init" Mar 20 16:00:12 crc kubenswrapper[4730]: E0320 16:00:12.817893 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.817900 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818070 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c857c34a-0efc-4ebe-8f42-e562e88de7a4" containerName="init" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818081 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="672cfda1-2ec8-41fe-b3dc-eabe4e60726d" containerName="collect-profiles" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818092 4730 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818101 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-httpd" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818113 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" containerName="glance-httpd" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818121 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" containerName="glance-log" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.818964 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.819048 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.826450 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-b8g88" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.827360 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.827590 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.829112 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.853003 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.857401 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.859642 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.860844 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.887581 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964034 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964118 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964156 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964391 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw75x\" (UniqueName: 
\"kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964474 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964529 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964553 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.964630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.992160 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rmtsq" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.993910 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 16:00:12 crc kubenswrapper[4730]: I0320 16:00:12.998297 4730 scope.go:117] "RemoveContainer" containerID="074644c96fd740e9300e0c15a05692e6097bf6107a8f64dc4cfd8d399bd7edb2" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.066964 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067032 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067054 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067078 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 
20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067109 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067168 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067237 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc 
kubenswrapper[4730]: I0320 16:00:13.067287 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067315 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067352 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067393 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566vl\" (UniqueName: \"kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw75x\" (UniqueName: \"kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 
16:00:13.067467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067498 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.067920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.068214 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.068323 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.075301 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.082503 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.083825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.088449 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw75x\" (UniqueName: \"kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.107016 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.113118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.123999 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.168838 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.168919 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.168960 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.168991 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj56l\" (UniqueName: \"kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169048 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x9cc\" (UniqueName: 
\"kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169080 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169126 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169164 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169211 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169258 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys\") pod \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\" (UID: \"e3349c2c-6f29-425d-9d25-b4f23821cfcc\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169374 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169401 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config\") pod \"37d31419-eada-4b93-bc20-bac232ced058\" (UID: \"37d31419-eada-4b93-bc20-bac232ced058\") " Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169731 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169768 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169893 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.169943 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.170009 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566vl\" (UniqueName: \"kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.171404 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.171931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.172241 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.174405 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts" (OuterVolumeSpecName: "scripts") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: "e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.174572 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.179617 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.187455 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: "e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.188948 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.189030 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.189614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l" (OuterVolumeSpecName: "kube-api-access-qj56l") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: 
"e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "kube-api-access-qj56l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.190264 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc" (OuterVolumeSpecName: "kube-api-access-6x9cc") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "kube-api-access-6x9cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.190532 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: "e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.191749 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.205717 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566vl\" (UniqueName: \"kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.213375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.224667 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: "e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.231025 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.233230 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.242419 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config" (OuterVolumeSpecName: "config") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.245540 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data" (OuterVolumeSpecName: "config-data") pod "e3349c2c-6f29-425d-9d25-b4f23821cfcc" (UID: "e3349c2c-6f29-425d-9d25-b4f23821cfcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.252105 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.253167 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37d31419-eada-4b93-bc20-bac232ced058" (UID: "37d31419-eada-4b93-bc20-bac232ced058"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271670 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271707 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271716 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj56l\" (UniqueName: \"kubernetes.io/projected/e3349c2c-6f29-425d-9d25-b4f23821cfcc-kube-api-access-qj56l\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271725 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x9cc\" (UniqueName: \"kubernetes.io/projected/37d31419-eada-4b93-bc20-bac232ced058-kube-api-access-6x9cc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271734 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271742 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271749 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271758 4730 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271765 4730 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3349c2c-6f29-425d-9d25-b4f23821cfcc-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271773 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271780 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.271790 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d31419-eada-4b93-bc20-bac232ced058-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.493383 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.543034 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b821c271-d46c-4a68-a6ea-438e616c4d47" path="/var/lib/kubelet/pods/b821c271-d46c-4a68-a6ea-438e616c4d47/volumes" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.544282 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4458311-d050-4887-b4c7-6df6c993d66e" path="/var/lib/kubelet/pods/c4458311-d050-4887-b4c7-6df6c993d66e/volumes" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.696630 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rmtsq" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.697888 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rmtsq" event={"ID":"e3349c2c-6f29-425d-9d25-b4f23821cfcc","Type":"ContainerDied","Data":"707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a"} Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.697934 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707ec669900453b1a2293614ef0436d415043c661d842387b86130234173b70a" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.715816 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" event={"ID":"37d31419-eada-4b93-bc20-bac232ced058","Type":"ContainerDied","Data":"3bd944df856483bb32661f18779d6ead5d56c48a59d438569d6061308ace393b"} Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.715916 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc649cc7-tct2h" Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.743396 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"] Mar 20 16:00:13 crc kubenswrapper[4730]: I0320 16:00:13.756753 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59fc649cc7-tct2h"] Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.083427 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rmtsq"] Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.092571 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rmtsq"] Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.183852 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4kfmn"] Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.185108 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="dnsmasq-dns" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.185130 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="dnsmasq-dns" Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.185150 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="init" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.185158 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="init" Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.185195 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3349c2c-6f29-425d-9d25-b4f23821cfcc" containerName="keystone-bootstrap" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.185513 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3349c2c-6f29-425d-9d25-b4f23821cfcc" containerName="keystone-bootstrap" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.185732 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d31419-eada-4b93-bc20-bac232ced058" containerName="dnsmasq-dns" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.185757 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3349c2c-6f29-425d-9d25-b4f23821cfcc" containerName="keystone-bootstrap" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.188426 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.192015 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kfmn"] Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.192028 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.192378 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjvk4" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.192498 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.192572 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.194566 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.291836 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " 
pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.292167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.292227 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.292378 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.292518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.292593 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98kgr\" (UniqueName: \"kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " 
pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.394704 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.394749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98kgr\" (UniqueName: \"kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.394832 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.394851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.394895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 
16:00:14.394940 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.402141 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.403201 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.405100 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.407912 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.408477 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.411353 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98kgr\" (UniqueName: \"kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr\") pod \"keystone-bootstrap-4kfmn\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: I0320 16:00:14.512054 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.966912 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.968497 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 16:00:14.969778 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:14 crc kubenswrapper[4730]: E0320 
16:00:14.969841 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" Mar 20 16:00:15 crc kubenswrapper[4730]: I0320 16:00:15.551903 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d31419-eada-4b93-bc20-bac232ced058" path="/var/lib/kubelet/pods/37d31419-eada-4b93-bc20-bac232ced058/volumes" Mar 20 16:00:15 crc kubenswrapper[4730]: I0320 16:00:15.553428 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3349c2c-6f29-425d-9d25-b4f23821cfcc" path="/var/lib/kubelet/pods/e3349c2c-6f29-425d-9d25-b4f23821cfcc/volumes" Mar 20 16:00:17 crc kubenswrapper[4730]: I0320 16:00:17.755461 4730 generic.go:334] "Generic (PLEG): container finished" podID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" exitCode=0 Mar 20 16:00:17 crc kubenswrapper[4730]: I0320 16:00:17.755775 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f653a7b-251e-4eb2-92cd-74e23ac4dba5","Type":"ContainerDied","Data":"859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba"} Mar 20 16:00:19 crc kubenswrapper[4730]: E0320 16:00:19.965364 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba is running failed: container process not found" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:19 crc kubenswrapper[4730]: E0320 16:00:19.966111 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba is running failed: container process not found" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:19 crc kubenswrapper[4730]: E0320 16:00:19.966802 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba is running failed: container process not found" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:00:19 crc kubenswrapper[4730]: E0320 16:00:19.966843 4730 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" Mar 20 16:00:20 crc kubenswrapper[4730]: I0320 16:00:20.699485 4730 scope.go:117] "RemoveContainer" containerID="5b6df27b1ba23b9df05a1fffaaba05b754c2fb2f90da0b4adb5478040787f576" Mar 20 16:00:21 crc kubenswrapper[4730]: I0320 16:00:21.808668 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"9f653a7b-251e-4eb2-92cd-74e23ac4dba5","Type":"ContainerDied","Data":"c21d7cc71db24a4080b26c4c7e24c638e9de88158fbe9899bd6305ba181030ef"} Mar 20 16:00:21 crc kubenswrapper[4730]: I0320 16:00:21.808866 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c21d7cc71db24a4080b26c4c7e24c638e9de88158fbe9899bd6305ba181030ef" Mar 20 16:00:21 crc kubenswrapper[4730]: I0320 16:00:21.893133 4730 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:00:21 crc kubenswrapper[4730]: I0320 16:00:21.904103 4730 scope.go:117] "RemoveContainer" containerID="20e1876e1a0a25a88aae1c6629f93da770afea8c540f5629af87b68779bdafc7" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.037439 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs\") pod \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.037492 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52bmh\" (UniqueName: \"kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh\") pod \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.037541 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data\") pod \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.037664 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle\") pod \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\" (UID: \"9f653a7b-251e-4eb2-92cd-74e23ac4dba5\") " Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.038976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs" (OuterVolumeSpecName: "logs") pod "9f653a7b-251e-4eb2-92cd-74e23ac4dba5" (UID: 
"9f653a7b-251e-4eb2-92cd-74e23ac4dba5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.064510 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh" (OuterVolumeSpecName: "kube-api-access-52bmh") pod "9f653a7b-251e-4eb2-92cd-74e23ac4dba5" (UID: "9f653a7b-251e-4eb2-92cd-74e23ac4dba5"). InnerVolumeSpecName "kube-api-access-52bmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.096876 4730 scope.go:117] "RemoveContainer" containerID="bb5b7167e8f80b19b6b3c7bf7988748aafee8cd2702715c1020def5b6b2b9fb6" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.106449 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f653a7b-251e-4eb2-92cd-74e23ac4dba5" (UID: "9f653a7b-251e-4eb2-92cd-74e23ac4dba5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.140321 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.140353 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.140363 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52bmh\" (UniqueName: \"kubernetes.io/projected/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-kube-api-access-52bmh\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.184105 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data" (OuterVolumeSpecName: "config-data") pod "9f653a7b-251e-4eb2-92cd-74e23ac4dba5" (UID: "9f653a7b-251e-4eb2-92cd-74e23ac4dba5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.184909 4730 scope.go:117] "RemoveContainer" containerID="44eee6d0807ce3fdd8bb3db86c31803cf1f646803d40fa88d8701a25be8c2aaa" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.242077 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f653a7b-251e-4eb2-92cd-74e23ac4dba5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.599602 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.665954 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kfmn"] Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.853963 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x2t9r" event={"ID":"a05675d7-cd2f-4810-862b-cb0d2d13cbdd","Type":"ContainerStarted","Data":"6f35041c9925accfe452d038ab9d3c1753f640407e6e4a51f0b4d6916cb04e6f"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.857442 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" event={"ID":"97b63a10-b572-4a37-a2a4-079852aa2d3d","Type":"ContainerStarted","Data":"347fe11ee7c05acba952c1a21fa83ca176c9f921071221e9dbdf6170682cd003"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.861043 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kfmn" event={"ID":"fedef548-ce31-47a2-92fc-911f167635f9","Type":"ContainerStarted","Data":"23a1410560f5a07b5a54b48e6e05506048eb34cae5e9bbf86d9f30143ef5d1b0"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.886837 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-x2t9r" podStartSLOduration=12.375509646 
podStartE2EDuration="23.886823039s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:01.462055347 +0000 UTC m=+1260.675426716" lastFinishedPulling="2026-03-20 16:00:12.97336874 +0000 UTC m=+1272.186740109" observedRunningTime="2026-03-20 16:00:22.870920743 +0000 UTC m=+1282.084292132" watchObservedRunningTime="2026-03-20 16:00:22.886823039 +0000 UTC m=+1282.100194408" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.887421 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" podStartSLOduration=2.577543313 podStartE2EDuration="22.887415542s" podCreationTimestamp="2026-03-20 16:00:00 +0000 UTC" firstStartedPulling="2026-03-20 16:00:01.625324578 +0000 UTC m=+1260.838695947" lastFinishedPulling="2026-03-20 16:00:21.935196807 +0000 UTC m=+1281.148568176" observedRunningTime="2026-03-20 16:00:22.884698501 +0000 UTC m=+1282.098069870" watchObservedRunningTime="2026-03-20 16:00:22.887415542 +0000 UTC m=+1282.100786911" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.895486 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tz6x7" event={"ID":"48fc8af0-e30f-4f3f-88d3-8b054c6359ef","Type":"ContainerStarted","Data":"59a0ed1595de1b0849599bb5a7c10e7cfbb46ad061c13c2ab2d12fc1bc355373"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.900495 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerStarted","Data":"656641958e552f93316749e193bcc1be09eafd97508856daa5fede226eb204fa"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.906918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerStarted","Data":"d6846337441238e0631dc47666643f2d85b6c9e548d29144f9972ff195d4dc1e"} Mar 20 16:00:22 crc 
kubenswrapper[4730]: I0320 16:00:22.912484 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tz6x7" podStartSLOduration=3.545593208 podStartE2EDuration="23.912446142s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:01.460581194 +0000 UTC m=+1260.673952563" lastFinishedPulling="2026-03-20 16:00:21.827434128 +0000 UTC m=+1281.040805497" observedRunningTime="2026-03-20 16:00:22.912286028 +0000 UTC m=+1282.125657417" watchObservedRunningTime="2026-03-20 16:00:22.912446142 +0000 UTC m=+1282.125817511" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.928555 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.928621 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85"} Mar 20 16:00:22 crc kubenswrapper[4730]: I0320 16:00:22.985941 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.015035 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.024618 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:23 crc kubenswrapper[4730]: E0320 16:00:23.025475 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.025496 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 
16:00:23.025718 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" containerName="watcher-applier" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.026863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.030006 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.046384 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.175108 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.175176 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.175208 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjz2\" (UniqueName: \"kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.175227 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.276820 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.276909 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.276950 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djjz2\" (UniqueName: \"kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.276974 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.277588 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 
16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.307520 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.307581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.315417 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djjz2\" (UniqueName: \"kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2\") pod \"watcher-applier-0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.403385 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.523286 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.551557 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f653a7b-251e-4eb2-92cd-74e23ac4dba5" path="/var/lib/kubelet/pods/9f653a7b-251e-4eb2-92cd-74e23ac4dba5/volumes" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.951344 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kfmn" event={"ID":"fedef548-ce31-47a2-92fc-911f167635f9","Type":"ContainerStarted","Data":"2cbf92580c54611c192a57c093a66c2f77a3a73726fc9a21c3aef24b4e922f95"} Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.953895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerStarted","Data":"81ccba513b601e62e75f217051576c1c723201231f3534205e6403a320c44aa9"} Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.957405 4730 generic.go:334] "Generic (PLEG): container finished" podID="97b63a10-b572-4a37-a2a4-079852aa2d3d" containerID="347fe11ee7c05acba952c1a21fa83ca176c9f921071221e9dbdf6170682cd003" exitCode=0 Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.957573 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" event={"ID":"97b63a10-b572-4a37-a2a4-079852aa2d3d","Type":"ContainerDied","Data":"347fe11ee7c05acba952c1a21fa83ca176c9f921071221e9dbdf6170682cd003"} Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.959571 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerStarted","Data":"1eef3a4c57a302c49282046a25ce9b6b686742d1068d8539a0c4f898222c31dc"} Mar 
20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.961348 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hbplf" event={"ID":"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd","Type":"ContainerStarted","Data":"1763f714611816ce76b822616e2726ee2af2ec1d061896faecc0edc07186595f"} Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.972159 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4kfmn" podStartSLOduration=9.972139 podStartE2EDuration="9.972139s" podCreationTimestamp="2026-03-20 16:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:23.969576513 +0000 UTC m=+1283.182947902" watchObservedRunningTime="2026-03-20 16:00:23.972139 +0000 UTC m=+1283.185510369" Mar 20 16:00:23 crc kubenswrapper[4730]: I0320 16:00:23.992800 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hbplf" podStartSLOduration=4.429086767 podStartE2EDuration="24.992782942s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:01.479049247 +0000 UTC m=+1260.692420616" lastFinishedPulling="2026-03-20 16:00:22.042745422 +0000 UTC m=+1281.256116791" observedRunningTime="2026-03-20 16:00:23.988700501 +0000 UTC m=+1283.202071870" watchObservedRunningTime="2026-03-20 16:00:23.992782942 +0000 UTC m=+1283.206154311" Mar 20 16:00:24 crc kubenswrapper[4730]: I0320 16:00:24.100856 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:00:24 crc kubenswrapper[4730]: W0320 16:00:24.170483 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd7fdc7_f9da_4f44_98d3_7b86f541b9f0.slice/crio-505e09709aeaef8a87d638031cf49a2f3e2fbdc9ec33e31ca6d7f0d3b9378532 WatchSource:0}: Error finding container 
505e09709aeaef8a87d638031cf49a2f3e2fbdc9ec33e31ca6d7f0d3b9378532: Status 404 returned error can't find the container with id 505e09709aeaef8a87d638031cf49a2f3e2fbdc9ec33e31ca6d7f0d3b9378532 Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.003626 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerStarted","Data":"d342f4f373c3a46fb291e5160cd15525a5be1018e68f010945cfba9de11fd3fe"} Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.007040 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0","Type":"ContainerStarted","Data":"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681"} Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.007077 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0","Type":"ContainerStarted","Data":"505e09709aeaef8a87d638031cf49a2f3e2fbdc9ec33e31ca6d7f0d3b9378532"} Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.018090 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerStarted","Data":"8f1291a8157b5fac6e0adc6431d157cb584ae26777485369c5a717b5d22da62c"} Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.033996 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerStarted","Data":"f29767ca3e9d6cdef0508a609251241ac12e93823a37b6304da4f130070ee420"} Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.036457 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.036410681 podStartE2EDuration="3.036410681s" podCreationTimestamp="2026-03-20 
16:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:25.030574911 +0000 UTC m=+1284.243946290" watchObservedRunningTime="2026-03-20 16:00:25.036410681 +0000 UTC m=+1284.249782060" Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.064752 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.064730975 podStartE2EDuration="13.064730975s" podCreationTimestamp="2026-03-20 16:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:25.060294535 +0000 UTC m=+1284.273665924" watchObservedRunningTime="2026-03-20 16:00:25.064730975 +0000 UTC m=+1284.278102344" Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.324777 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.425457 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459cr\" (UniqueName: \"kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr\") pod \"97b63a10-b572-4a37-a2a4-079852aa2d3d\" (UID: \"97b63a10-b572-4a37-a2a4-079852aa2d3d\") " Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.435477 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr" (OuterVolumeSpecName: "kube-api-access-459cr") pod "97b63a10-b572-4a37-a2a4-079852aa2d3d" (UID: "97b63a10-b572-4a37-a2a4-079852aa2d3d"). InnerVolumeSpecName "kube-api-access-459cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.528309 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-459cr\" (UniqueName: \"kubernetes.io/projected/97b63a10-b572-4a37-a2a4-079852aa2d3d-kube-api-access-459cr\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.957908 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567034-sdvfb"] Mar 20 16:00:25 crc kubenswrapper[4730]: I0320 16:00:25.966151 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567034-sdvfb"] Mar 20 16:00:26 crc kubenswrapper[4730]: I0320 16:00:26.049435 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" Mar 20 16:00:26 crc kubenswrapper[4730]: I0320 16:00:26.049476 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567040-2zl4f" event={"ID":"97b63a10-b572-4a37-a2a4-079852aa2d3d","Type":"ContainerDied","Data":"5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a"} Mar 20 16:00:26 crc kubenswrapper[4730]: I0320 16:00:26.049519 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5160551b06a1e1f1e7b6689c4dec94a675729bc0e9edd00f1bb67e7f9a23750a" Mar 20 16:00:27 crc kubenswrapper[4730]: I0320 16:00:27.548663 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84a0097-0ea0-4397-b72b-07e391268b84" path="/var/lib/kubelet/pods/c84a0097-0ea0-4397-b72b-07e391268b84/volumes" Mar 20 16:00:28 crc kubenswrapper[4730]: I0320 16:00:28.403640 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 20 16:00:29 crc kubenswrapper[4730]: I0320 16:00:29.078302 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerStarted","Data":"68b0b2752749a64e7ce292cfa6aabcc6400dcc4552e165b090d093ce63fe5a35"} Mar 20 16:00:29 crc kubenswrapper[4730]: I0320 16:00:29.101640 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.101608592 podStartE2EDuration="17.101608592s" podCreationTimestamp="2026-03-20 16:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:29.100663001 +0000 UTC m=+1288.314034390" watchObservedRunningTime="2026-03-20 16:00:29.101608592 +0000 UTC m=+1288.314979961" Mar 20 16:00:29 crc kubenswrapper[4730]: I0320 16:00:29.974531 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:29 crc kubenswrapper[4730]: I0320 16:00:29.974835 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:29 crc kubenswrapper[4730]: E0320 16:00:29.976155 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85 is running failed: container process not found" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:00:29 crc kubenswrapper[4730]: E0320 16:00:29.976530 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85 is running failed: container process not found" 
containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:00:29 crc kubenswrapper[4730]: E0320 16:00:29.976711 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85 is running failed: container process not found" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:00:29 crc kubenswrapper[4730]: E0320 16:00:29.976736 4730 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.087962 4730 generic.go:334] "Generic (PLEG): container finished" podID="fedef548-ce31-47a2-92fc-911f167635f9" containerID="2cbf92580c54611c192a57c093a66c2f77a3a73726fc9a21c3aef24b4e922f95" exitCode=0 Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.088039 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kfmn" event={"ID":"fedef548-ce31-47a2-92fc-911f167635f9","Type":"ContainerDied","Data":"2cbf92580c54611c192a57c093a66c2f77a3a73726fc9a21c3aef24b4e922f95"} Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.090895 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f6c808e-d523-48bd-8ec2-28b625834317" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" exitCode=1 Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.090926 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85"} Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.090997 4730 scope.go:117] "RemoveContainer" containerID="c057d3a2f6ef1e71f3dc2bc7abb9c07a0dfab9e5b78bf4eb4546276a24c2d109" Mar 20 16:00:30 crc kubenswrapper[4730]: I0320 16:00:30.091687 4730 scope.go:117] "RemoveContainer" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" Mar 20 16:00:30 crc kubenswrapper[4730]: E0320 16:00:30.091928 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.453276 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540510 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98kgr\" (UniqueName: \"kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540628 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540658 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540753 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540935 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.540974 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys\") pod \"fedef548-ce31-47a2-92fc-911f167635f9\" (UID: \"fedef548-ce31-47a2-92fc-911f167635f9\") " Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.546077 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts" (OuterVolumeSpecName: "scripts") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.546480 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.546732 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr" (OuterVolumeSpecName: "kube-api-access-98kgr") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "kube-api-access-98kgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.548321 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.567707 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data" (OuterVolumeSpecName: "config-data") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.569893 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fedef548-ce31-47a2-92fc-911f167635f9" (UID: "fedef548-ce31-47a2-92fc-911f167635f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.643986 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.644012 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.644022 4730 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.644034 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98kgr\" (UniqueName: \"kubernetes.io/projected/fedef548-ce31-47a2-92fc-911f167635f9-kube-api-access-98kgr\") on node \"crc\" DevicePath \"\"" Mar 20 
16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.644043 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:31 crc kubenswrapper[4730]: I0320 16:00:31.644051 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fedef548-ce31-47a2-92fc-911f167635f9-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.114870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kfmn" event={"ID":"fedef548-ce31-47a2-92fc-911f167635f9","Type":"ContainerDied","Data":"23a1410560f5a07b5a54b48e6e05506048eb34cae5e9bbf86d9f30143ef5d1b0"} Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.114910 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a1410560f5a07b5a54b48e6e05506048eb34cae5e9bbf86d9f30143ef5d1b0" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.114884 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4kfmn" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.117822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerStarted","Data":"71df4b6d20c02608180532f04e26895e3f9ad0248e3128f02a3d2c457f5baa48"} Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.119603 4730 generic.go:334] "Generic (PLEG): container finished" podID="a05675d7-cd2f-4810-862b-cb0d2d13cbdd" containerID="6f35041c9925accfe452d038ab9d3c1753f640407e6e4a51f0b4d6916cb04e6f" exitCode=0 Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.119660 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x2t9r" event={"ID":"a05675d7-cd2f-4810-862b-cb0d2d13cbdd","Type":"ContainerDied","Data":"6f35041c9925accfe452d038ab9d3c1753f640407e6e4a51f0b4d6916cb04e6f"} Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.246759 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6fb7949f77-2l9t7"] Mar 20 16:00:32 crc kubenswrapper[4730]: E0320 16:00:32.247258 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fedef548-ce31-47a2-92fc-911f167635f9" containerName="keystone-bootstrap" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.247278 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fedef548-ce31-47a2-92fc-911f167635f9" containerName="keystone-bootstrap" Mar 20 16:00:32 crc kubenswrapper[4730]: E0320 16:00:32.247305 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b63a10-b572-4a37-a2a4-079852aa2d3d" containerName="oc" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.247313 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b63a10-b572-4a37-a2a4-079852aa2d3d" containerName="oc" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.247580 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fedef548-ce31-47a2-92fc-911f167635f9" containerName="keystone-bootstrap" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.247605 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b63a10-b572-4a37-a2a4-079852aa2d3d" containerName="oc" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.248344 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255106 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255341 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255366 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255552 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255700 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjvk4" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.255549 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.270153 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fb7949f77-2l9t7"] Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360268 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-config-data\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " 
pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-fernet-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360367 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-public-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-scripts\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360447 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-internal-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360482 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-credential-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " 
pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360515 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5lz\" (UniqueName: \"kubernetes.io/projected/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-kube-api-access-mz5lz\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.360540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-combined-ca-bundle\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.461995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-combined-ca-bundle\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462463 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-config-data\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462499 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-fernet-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " 
pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-public-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462631 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-scripts\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462689 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-internal-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-credential-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.462807 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5lz\" (UniqueName: \"kubernetes.io/projected/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-kube-api-access-mz5lz\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: 
I0320 16:00:32.466332 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-config-data\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.466848 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-public-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.467003 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-scripts\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.467113 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-fernet-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.467168 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-credential-keys\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.467527 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-combined-ca-bundle\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.478702 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-internal-tls-certs\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.485008 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5lz\" (UniqueName: \"kubernetes.io/projected/e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d-kube-api-access-mz5lz\") pod \"keystone-6fb7949f77-2l9t7\" (UID: \"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d\") " pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:32 crc kubenswrapper[4730]: I0320 16:00:32.592241 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.056080 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fb7949f77-2l9t7"] Mar 20 16:00:33 crc kubenswrapper[4730]: W0320 16:00:33.059776 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2b9f0c5_80cc_4a4c_bbd8_c70cda9d5d3d.slice/crio-992a0a8a715f735a908f7e67a9aa86874d7aa020108d6ed3d3f2b9de33ca1de4 WatchSource:0}: Error finding container 992a0a8a715f735a908f7e67a9aa86874d7aa020108d6ed3d3f2b9de33ca1de4: Status 404 returned error can't find the container with id 992a0a8a715f735a908f7e67a9aa86874d7aa020108d6ed3d3f2b9de33ca1de4 Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.128303 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fb7949f77-2l9t7" event={"ID":"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d","Type":"ContainerStarted","Data":"992a0a8a715f735a908f7e67a9aa86874d7aa020108d6ed3d3f2b9de33ca1de4"} Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.175570 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.175622 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.225686 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.271643 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.403738 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 20 
16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.429467 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.494593 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.494647 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.516698 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.524393 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.547349 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.582038 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.582102 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.582299 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxgg6\" (UniqueName: 
\"kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.582609 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs" (OuterVolumeSpecName: "logs") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.582945 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.583074 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.583599 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.588379 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts" (OuterVolumeSpecName: "scripts") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.605241 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6" (OuterVolumeSpecName: "kube-api-access-xxgg6") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd"). InnerVolumeSpecName "kube-api-access-xxgg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:33 crc kubenswrapper[4730]: E0320 16:00:33.627139 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle podName:a05675d7-cd2f-4810-862b-cb0d2d13cbdd nodeName:}" failed. No retries permitted until 2026-03-20 16:00:34.127110508 +0000 UTC m=+1293.340481877 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd") : error deleting /var/lib/kubelet/pods/a05675d7-cd2f-4810-862b-cb0d2d13cbdd/volume-subpaths: remove /var/lib/kubelet/pods/a05675d7-cd2f-4810-862b-cb0d2d13cbdd/volume-subpaths: no such file or directory Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.629821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data" (OuterVolumeSpecName: "config-data") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.685634 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxgg6\" (UniqueName: \"kubernetes.io/projected/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-kube-api-access-xxgg6\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.685682 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:33 crc kubenswrapper[4730]: I0320 16:00:33.685696 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.147654 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x2t9r" event={"ID":"a05675d7-cd2f-4810-862b-cb0d2d13cbdd","Type":"ContainerDied","Data":"31cc41440870730294a7b216b0c2b45c7a76296191729b22f502e1990a4511bd"} Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.147708 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cc41440870730294a7b216b0c2b45c7a76296191729b22f502e1990a4511bd" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.147795 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-x2t9r" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.149396 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fb7949f77-2l9t7" event={"ID":"e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d","Type":"ContainerStarted","Data":"b29c80237578b12b920e2e64f747c1b3f7795837afb2efdcf749183f1cac741c"} Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.149869 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.151038 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.151059 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.152986 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.153030 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.185083 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6fb7949f77-2l9t7" podStartSLOduration=2.185064266 podStartE2EDuration="2.185064266s" podCreationTimestamp="2026-03-20 16:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:34.181469556 +0000 UTC m=+1293.394840925" watchObservedRunningTime="2026-03-20 16:00:34.185064266 +0000 UTC m=+1293.398435645" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.192429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") pod \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\" (UID: \"a05675d7-cd2f-4810-862b-cb0d2d13cbdd\") " Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.197047 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a05675d7-cd2f-4810-862b-cb0d2d13cbdd" (UID: "a05675d7-cd2f-4810-862b-cb0d2d13cbdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.245037 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.289195 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78b446cdb6-zs6nw"] Mar 20 16:00:34 crc kubenswrapper[4730]: E0320 16:00:34.289627 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05675d7-cd2f-4810-862b-cb0d2d13cbdd" containerName="placement-db-sync" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.289650 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05675d7-cd2f-4810-862b-cb0d2d13cbdd" containerName="placement-db-sync" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.289848 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05675d7-cd2f-4810-862b-cb0d2d13cbdd" containerName="placement-db-sync" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.290800 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.297603 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05675d7-cd2f-4810-862b-cb0d2d13cbdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.297955 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.298346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.302122 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78b446cdb6-zs6nw"] Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399182 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2885c5d-681f-4e22-bdeb-b716957d83e1-logs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399312 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-scripts\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399361 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-combined-ca-bundle\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " 
pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399458 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-config-data\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmk8n\" (UniqueName: \"kubernetes.io/projected/d2885c5d-681f-4e22-bdeb-b716957d83e1-kube-api-access-qmk8n\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-public-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.399576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-internal-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.501812 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-combined-ca-bundle\") pod \"placement-78b446cdb6-zs6nw\" (UID: 
\"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.501911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-config-data\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.501942 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmk8n\" (UniqueName: \"kubernetes.io/projected/d2885c5d-681f-4e22-bdeb-b716957d83e1-kube-api-access-qmk8n\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.501968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-public-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.502008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-internal-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.502062 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2885c5d-681f-4e22-bdeb-b716957d83e1-logs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " 
pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.502116 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-scripts\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.518870 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2885c5d-681f-4e22-bdeb-b716957d83e1-logs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.521789 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-scripts\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.522835 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-internal-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.524380 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-combined-ca-bundle\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.524429 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-config-data\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.525327 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2885c5d-681f-4e22-bdeb-b716957d83e1-public-tls-certs\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.537713 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmk8n\" (UniqueName: \"kubernetes.io/projected/d2885c5d-681f-4e22-bdeb-b716957d83e1-kube-api-access-qmk8n\") pod \"placement-78b446cdb6-zs6nw\" (UID: \"d2885c5d-681f-4e22-bdeb-b716957d83e1\") " pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:34 crc kubenswrapper[4730]: I0320 16:00:34.609762 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:35 crc kubenswrapper[4730]: I0320 16:00:35.164878 4730 generic.go:334] "Generic (PLEG): container finished" podID="48fc8af0-e30f-4f3f-88d3-8b054c6359ef" containerID="59a0ed1595de1b0849599bb5a7c10e7cfbb46ad061c13c2ab2d12fc1bc355373" exitCode=0 Mar 20 16:00:35 crc kubenswrapper[4730]: I0320 16:00:35.164952 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tz6x7" event={"ID":"48fc8af0-e30f-4f3f-88d3-8b054c6359ef","Type":"ContainerDied","Data":"59a0ed1595de1b0849599bb5a7c10e7cfbb46ad061c13c2ab2d12fc1bc355373"} Mar 20 16:00:35 crc kubenswrapper[4730]: I0320 16:00:35.206593 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78b446cdb6-zs6nw"] Mar 20 16:00:35 crc kubenswrapper[4730]: W0320 16:00:35.210204 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2885c5d_681f_4e22_bdeb_b716957d83e1.slice/crio-421f8aaa4b2b066ba0db639897a31bc1c26529226105978e2e6c3ddf0c3f4ce8 WatchSource:0}: Error finding container 421f8aaa4b2b066ba0db639897a31bc1c26529226105978e2e6c3ddf0c3f4ce8: Status 404 returned error can't find the container with id 421f8aaa4b2b066ba0db639897a31bc1c26529226105978e2e6c3ddf0c3f4ce8 Mar 20 16:00:35 crc kubenswrapper[4730]: I0320 16:00:35.979164 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140051 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data\") pod \"468631ad-821b-469e-a166-1d32d370e5fa\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140192 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvh7\" (UniqueName: \"kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7\") pod \"468631ad-821b-469e-a166-1d32d370e5fa\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140294 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle\") pod \"468631ad-821b-469e-a166-1d32d370e5fa\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140356 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs\") pod \"468631ad-821b-469e-a166-1d32d370e5fa\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140435 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca\") pod \"468631ad-821b-469e-a166-1d32d370e5fa\" (UID: \"468631ad-821b-469e-a166-1d32d370e5fa\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.140867 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs" (OuterVolumeSpecName: "logs") pod "468631ad-821b-469e-a166-1d32d370e5fa" (UID: "468631ad-821b-469e-a166-1d32d370e5fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.148569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7" (OuterVolumeSpecName: "kube-api-access-jnvh7") pod "468631ad-821b-469e-a166-1d32d370e5fa" (UID: "468631ad-821b-469e-a166-1d32d370e5fa"). InnerVolumeSpecName "kube-api-access-jnvh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.178602 4730 generic.go:334] "Generic (PLEG): container finished" podID="468631ad-821b-469e-a166-1d32d370e5fa" containerID="3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126" exitCode=137 Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.178729 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.178798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerDied","Data":"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126"} Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.178854 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"468631ad-821b-469e-a166-1d32d370e5fa","Type":"ContainerDied","Data":"61de70dac9ca369e87e9de4023e0b6d3ab234abdc4d63ca022c46c3f926b57cd"} Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.178874 4730 scope.go:117] "RemoveContainer" containerID="3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.184640 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78b446cdb6-zs6nw" event={"ID":"d2885c5d-681f-4e22-bdeb-b716957d83e1","Type":"ContainerStarted","Data":"c7752714919e7683b19fa98a27dc456cceb1ff84e231d081c0bf6cdf50a72206"} Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.184681 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78b446cdb6-zs6nw" event={"ID":"d2885c5d-681f-4e22-bdeb-b716957d83e1","Type":"ContainerStarted","Data":"7eb26d82b530714f030687355ff1803daa124db098cf71b7400758d270fece6e"} Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.184697 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78b446cdb6-zs6nw" event={"ID":"d2885c5d-681f-4e22-bdeb-b716957d83e1","Type":"ContainerStarted","Data":"421f8aaa4b2b066ba0db639897a31bc1c26529226105978e2e6c3ddf0c3f4ce8"} Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.184714 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 
16:00:36.184767 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.184777 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.185365 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.185383 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.185932 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.186479 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "468631ad-821b-469e-a166-1d32d370e5fa" (UID: "468631ad-821b-469e-a166-1d32d370e5fa"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.188167 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "468631ad-821b-469e-a166-1d32d370e5fa" (UID: "468631ad-821b-469e-a166-1d32d370e5fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.216690 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data" (OuterVolumeSpecName: "config-data") pod "468631ad-821b-469e-a166-1d32d370e5fa" (UID: "468631ad-821b-469e-a166-1d32d370e5fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.219477 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78b446cdb6-zs6nw" podStartSLOduration=2.219462741 podStartE2EDuration="2.219462741s" podCreationTimestamp="2026-03-20 16:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:36.213806865 +0000 UTC m=+1295.427178234" watchObservedRunningTime="2026-03-20 16:00:36.219462741 +0000 UTC m=+1295.432834110" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.244376 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.244408 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvh7\" (UniqueName: \"kubernetes.io/projected/468631ad-821b-469e-a166-1d32d370e5fa-kube-api-access-jnvh7\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.244419 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.244429 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468631ad-821b-469e-a166-1d32d370e5fa-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.244438 4730 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/468631ad-821b-469e-a166-1d32d370e5fa-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc 
kubenswrapper[4730]: I0320 16:00:36.342393 4730 scope.go:117] "RemoveContainer" containerID="f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.366931 4730 scope.go:117] "RemoveContainer" containerID="3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126" Mar 20 16:00:36 crc kubenswrapper[4730]: E0320 16:00:36.372088 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126\": container with ID starting with 3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126 not found: ID does not exist" containerID="3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.372137 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126"} err="failed to get container status \"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126\": rpc error: code = NotFound desc = could not find container \"3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126\": container with ID starting with 3f2599acfa566a2a6933c070fb527ae037e48d58578163cf559260ee0ee91126 not found: ID does not exist" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.372174 4730 scope.go:117] "RemoveContainer" containerID="f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5" Mar 20 16:00:36 crc kubenswrapper[4730]: E0320 16:00:36.372514 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5\": container with ID starting with f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5 not found: ID does not exist" 
containerID="f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.372561 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5"} err="failed to get container status \"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5\": rpc error: code = NotFound desc = could not find container \"f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5\": container with ID starting with f7dc5000437c6d89457d868ca7a318170322b81dfd62eee1c3bc3df85ceba4e5 not found: ID does not exist" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.512003 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.554403 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.572442 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.593690 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:36 crc kubenswrapper[4730]: E0320 16:00:36.594098 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api-log" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594117 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api-log" Mar 20 16:00:36 crc kubenswrapper[4730]: E0320 16:00:36.594149 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fc8af0-e30f-4f3f-88d3-8b054c6359ef" containerName="barbican-db-sync" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594156 4730 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="48fc8af0-e30f-4f3f-88d3-8b054c6359ef" containerName="barbican-db-sync" Mar 20 16:00:36 crc kubenswrapper[4730]: E0320 16:00:36.594170 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594177 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594409 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fc8af0-e30f-4f3f-88d3-8b054c6359ef" containerName="barbican-db-sync" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594428 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.594440 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="468631ad-821b-469e-a166-1d32d370e5fa" containerName="watcher-api-log" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.595718 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.599742 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.602705 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.664933 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle\") pod \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.665017 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data\") pod \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.665100 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bggsr\" (UniqueName: \"kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr\") pod \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\" (UID: \"48fc8af0-e30f-4f3f-88d3-8b054c6359ef\") " Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.669765 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "48fc8af0-e30f-4f3f-88d3-8b054c6359ef" (UID: "48fc8af0-e30f-4f3f-88d3-8b054c6359ef"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.669957 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr" (OuterVolumeSpecName: "kube-api-access-bggsr") pod "48fc8af0-e30f-4f3f-88d3-8b054c6359ef" (UID: "48fc8af0-e30f-4f3f-88d3-8b054c6359ef"). InnerVolumeSpecName "kube-api-access-bggsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.689555 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48fc8af0-e30f-4f3f-88d3-8b054c6359ef" (UID: "48fc8af0-e30f-4f3f-88d3-8b054c6359ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.747137 4730 scope.go:117] "RemoveContainer" containerID="cd338cd8acc0dfd58bb17dd35d4fa074369101fa940bc1e78ceafdde3c9aa8ec" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767422 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lq6\" (UniqueName: \"kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767524 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767672 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767705 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767811 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767822 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.767831 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bggsr\" (UniqueName: \"kubernetes.io/projected/48fc8af0-e30f-4f3f-88d3-8b054c6359ef-kube-api-access-bggsr\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.869635 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-m2lq6\" (UniqueName: \"kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.870045 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.870145 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.870162 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.870177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.870877 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 
16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.892481 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.892553 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.893039 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.894806 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lq6\" (UniqueName: \"kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6\") pod \"watcher-api-0\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " pod="openstack/watcher-api-0" Mar 20 16:00:36 crc kubenswrapper[4730]: I0320 16:00:36.915115 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.105307 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.107550 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.131187 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.131869 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.220673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tz6x7" event={"ID":"48fc8af0-e30f-4f3f-88d3-8b054c6359ef","Type":"ContainerDied","Data":"af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a"} Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.220717 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af7beef135dc284222c89d8da5556d80f3f65072f0a1d94b5d3c1cbd5f0ae59a" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.220780 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tz6x7" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.438229 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86bc9f54b4-6szxq"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.439696 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.451561 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.451834 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wcvgq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.453446 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.482301 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-54b9958865-vn9kj"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.483851 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.487223 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.493396 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86bc9f54b4-6szxq"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.520294 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54b9958865-vn9kj"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.597374 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="468631ad-821b-469e-a166-1d32d370e5fa" path="/var/lib/kubelet/pods/468631ad-821b-469e-a166-1d32d370e5fa/volumes" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.598102 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.602201 4730 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.613505 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.627833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.627918 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data-custom\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.627954 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-logs\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.627972 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.627988 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628012 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9k6g\" (UniqueName: \"kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628029 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data-custom\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpbl\" (UniqueName: \"kubernetes.io/projected/8f8c40f6-c8d3-4c8c-97eb-643d32774174-kube-api-access-rvpbl\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628058 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" 
Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628076 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-combined-ca-bundle\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-combined-ca-bundle\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628160 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628174 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f8c40f6-c8d3-4c8c-97eb-643d32774174-logs\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: 
\"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628188 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcckp\" (UniqueName: \"kubernetes.io/projected/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-kube-api-access-jcckp\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.628205 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730614 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730662 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f8c40f6-c8d3-4c8c-97eb-643d32774174-logs\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730687 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcckp\" (UniqueName: \"kubernetes.io/projected/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-kube-api-access-jcckp\") 
pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730716 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730837 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data-custom\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730871 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-logs\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730890 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730908 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730935 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9k6g\" (UniqueName: \"kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data-custom\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730965 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpbl\" (UniqueName: \"kubernetes.io/projected/8f8c40f6-c8d3-4c8c-97eb-643d32774174-kube-api-access-rvpbl\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730983 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.730999 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.731047 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-combined-ca-bundle\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.731077 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-combined-ca-bundle\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.732651 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-logs\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.734129 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.734381 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f8c40f6-c8d3-4c8c-97eb-643d32774174-logs\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.743892 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-combined-ca-bundle\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.746296 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.747709 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.749118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-combined-ca-bundle\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.749717 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.750197 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.751702 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.757820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data-custom\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.761287 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f8c40f6-c8d3-4c8c-97eb-643d32774174-config-data\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.765268 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcckp\" (UniqueName: \"kubernetes.io/projected/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-kube-api-access-jcckp\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.766874 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4dfee88-47ff-4e8b-9f46-60cc17fb0080-config-data-custom\") pod \"barbican-keystone-listener-86bc9f54b4-6szxq\" (UID: \"e4dfee88-47ff-4e8b-9f46-60cc17fb0080\") " pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.779752 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9k6g\" (UniqueName: \"kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g\") pod \"dnsmasq-dns-6dccc4d8b9-qqxcj\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.785204 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.785593 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.786749 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.797534 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.800849 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpbl\" (UniqueName: \"kubernetes.io/projected/8f8c40f6-c8d3-4c8c-97eb-643d32774174-kube-api-access-rvpbl\") pod \"barbican-worker-54b9958865-vn9kj\" (UID: \"8f8c40f6-c8d3-4c8c-97eb-643d32774174\") " pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.801611 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.812700 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54b9958865-vn9kj" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.936755 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.936795 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.937762 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.937815 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.938037 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfcrj\" (UniqueName: \"kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:37 crc kubenswrapper[4730]: I0320 16:00:37.959773 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.044949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.045774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.045878 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.045901 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.045950 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfcrj\" (UniqueName: \"kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " 
pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.052110 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.053765 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.059481 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.059704 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: I0320 16:00:38.062549 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfcrj\" (UniqueName: \"kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj\") pod \"barbican-api-66f7c676c8-wdfnw\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:38 crc kubenswrapper[4730]: 
I0320 16:00:38.153858 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.251280 4730 generic.go:334] "Generic (PLEG): container finished" podID="09f27249-61fb-4e13-9eb9-9b804f256d81" containerID="3f4c141955a3579b06be021435ce1c3642e2a9b4483a932d05648a4559764229" exitCode=0 Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.251587 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z9mtx" event={"ID":"09f27249-61fb-4e13-9eb9-9b804f256d81","Type":"ContainerDied","Data":"3f4c141955a3579b06be021435ce1c3642e2a9b4483a932d05648a4559764229"} Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.254018 4730 generic.go:334] "Generic (PLEG): container finished" podID="6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" containerID="1763f714611816ce76b822616e2726ee2af2ec1d061896faecc0edc07186595f" exitCode=0 Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.254061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hbplf" event={"ID":"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd","Type":"ContainerDied","Data":"1763f714611816ce76b822616e2726ee2af2ec1d061896faecc0edc07186595f"} Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.973825 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:39 crc kubenswrapper[4730]: I0320 16:00:39.974520 4730 scope.go:117] "RemoveContainer" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.570715 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86947bcbc8-94hl8"] Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.572471 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.587287 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.587867 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.615242 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86947bcbc8-94hl8"] Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701228 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59460a49-c9fe-46c9-b898-d08234ca7cd3-logs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701318 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-public-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701408 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-combined-ca-bundle\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701435 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvg6\" (UniqueName: \"kubernetes.io/projected/59460a49-c9fe-46c9-b898-d08234ca7cd3-kube-api-access-8nvg6\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701473 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data-custom\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.701510 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-internal-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803400 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data-custom\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803470 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-internal-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803502 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59460a49-c9fe-46c9-b898-d08234ca7cd3-logs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803547 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-public-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803658 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-combined-ca-bundle\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803685 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.803707 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvg6\" (UniqueName: 
\"kubernetes.io/projected/59460a49-c9fe-46c9-b898-d08234ca7cd3-kube-api-access-8nvg6\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.809195 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59460a49-c9fe-46c9-b898-d08234ca7cd3-logs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.811136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-combined-ca-bundle\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.812901 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-public-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.813386 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-internal-tls-certs\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.819817 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data-custom\") 
pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.830261 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59460a49-c9fe-46c9-b898-d08234ca7cd3-config-data\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.839017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvg6\" (UniqueName: \"kubernetes.io/projected/59460a49-c9fe-46c9-b898-d08234ca7cd3-kube-api-access-8nvg6\") pod \"barbican-api-86947bcbc8-94hl8\" (UID: \"59460a49-c9fe-46c9-b898-d08234ca7cd3\") " pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:40 crc kubenswrapper[4730]: I0320 16:00:40.915084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:42 crc kubenswrapper[4730]: I0320 16:00:42.880515 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:00:42 crc kubenswrapper[4730]: I0320 16:00:42.881172 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.821818 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.831294 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.963797 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.963913 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle\") pod \"09f27249-61fb-4e13-9eb9-9b804f256d81\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.963940 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24qfr\" (UniqueName: \"kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964036 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964059 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: 
\"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7sfc\" (UniqueName: \"kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc\") pod \"09f27249-61fb-4e13-9eb9-9b804f256d81\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964126 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964162 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config\") pod \"09f27249-61fb-4e13-9eb9-9b804f256d81\" (UID: \"09f27249-61fb-4e13-9eb9-9b804f256d81\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.964222 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle\") pod \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\" (UID: \"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd\") " Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.982188 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc" (OuterVolumeSpecName: "kube-api-access-t7sfc") pod "09f27249-61fb-4e13-9eb9-9b804f256d81" (UID: "09f27249-61fb-4e13-9eb9-9b804f256d81"). InnerVolumeSpecName "kube-api-access-t7sfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.982340 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 16:00:43 crc kubenswrapper[4730]: I0320 16:00:43.986430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr" (OuterVolumeSpecName: "kube-api-access-24qfr") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "kube-api-access-24qfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.008629 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.010360 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts" (OuterVolumeSpecName: "scripts") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.025516 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.062409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09f27249-61fb-4e13-9eb9-9b804f256d81" (UID: "09f27249-61fb-4e13-9eb9-9b804f256d81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067122 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067149 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067158 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067166 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067176 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24qfr\" (UniqueName: \"kubernetes.io/projected/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-kube-api-access-24qfr\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067185 4730 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.067193 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7sfc\" (UniqueName: \"kubernetes.io/projected/09f27249-61fb-4e13-9eb9-9b804f256d81-kube-api-access-t7sfc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.077408 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config" (OuterVolumeSpecName: "config") pod "09f27249-61fb-4e13-9eb9-9b804f256d81" (UID: "09f27249-61fb-4e13-9eb9-9b804f256d81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.102498 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data" (OuterVolumeSpecName: "config-data") pod "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" (UID: "6fb4d42d-6cd9-480c-8ee0-1e168504a4cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.171482 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.171767 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/09f27249-61fb-4e13-9eb9-9b804f256d81-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.304974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hbplf" event={"ID":"6fb4d42d-6cd9-480c-8ee0-1e168504a4cd","Type":"ContainerDied","Data":"ad1075e305b4a94d7393955deeb754baca56955fca19f6133e616c9e84808c7e"} Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.305049 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad1075e305b4a94d7393955deeb754baca56955fca19f6133e616c9e84808c7e" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.304999 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hbplf" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.317967 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z9mtx" event={"ID":"09f27249-61fb-4e13-9eb9-9b804f256d81","Type":"ContainerDied","Data":"a168269d42bdf34c043574c3c59fe418fbfcc2023ca3015cc5326a7e3f76f715"} Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.318003 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a168269d42bdf34c043574c3c59fe418fbfcc2023ca3015cc5326a7e3f76f715" Mar 20 16:00:44 crc kubenswrapper[4730]: I0320 16:00:44.318205 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z9mtx" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.165154 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.207792 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:00:45 crc kubenswrapper[4730]: E0320 16:00:45.208468 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f27249-61fb-4e13-9eb9-9b804f256d81" containerName="neutron-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.208489 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f27249-61fb-4e13-9eb9-9b804f256d81" containerName="neutron-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: E0320 16:00:45.208506 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" containerName="cinder-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.208513 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" containerName="cinder-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.208745 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" containerName="cinder-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.208778 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f27249-61fb-4e13-9eb9-9b804f256d81" containerName="neutron-db-sync" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.209895 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.221500 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.221847 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.222019 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bg7s8" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.222330 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.225636 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.239952 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.241985 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.279606 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295291 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295375 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295420 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295451 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295508 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhzx\" (UniqueName: \"kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295534 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vz6\" (UniqueName: \"kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295560 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295596 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295621 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.295705 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425546 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425661 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425747 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425785 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425821 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425857 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhzx\" (UniqueName: \"kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425886 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vz6\" (UniqueName: \"kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.425989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.426027 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.426055 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.426147 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.431645 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 
16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.432966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.435195 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.435563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.440267 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.442440 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.444863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " 
pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.447790 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.452553 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.452746 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.452862 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.453037 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c2wgv" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.498605 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.499867 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.500540 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srhzx\" (UniqueName: \"kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 
crc kubenswrapper[4730]: I0320 16:00:45.501300 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.501412 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.502802 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.513609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vz6\" (UniqueName: \"kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6\") pod \"dnsmasq-dns-8448dbfc69-4229t\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.513764 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.524202 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.526110 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.566001 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.574722 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.574766 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.580198 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.588639 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.588901 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.658910 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.658978 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.659000 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njgl\" (UniqueName: \"kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: 
\"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.659141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.660942 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.660990 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.661078 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.661115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: 
\"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.661194 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.661264 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ld6\" (UniqueName: \"kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.661322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.707938 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.763852 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.763914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.763940 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.763957 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.763982 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764013 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764028 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764059 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97ld6\" (UniqueName: \"kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764085 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764120 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764167 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764198 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764215 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7njgl\" (UniqueName: \"kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764231 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764273 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrl6\" (UniqueName: 
\"kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.764307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.765709 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.766074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.766973 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: 
\"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.767632 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.770315 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.771927 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.772438 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.774788 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.779030 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.788735 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njgl\" (UniqueName: \"kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl\") pod \"dnsmasq-dns-76cb94d47c-txmh6\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.788931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97ld6\" (UniqueName: \"kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6\") pod \"neutron-6858c8d8f6-k4smz\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.867924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869211 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrl6\" (UniqueName: 
\"kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869660 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869740 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.869821 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.870166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 
16:00:45.875926 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.919311 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.923721 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.923850 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.924037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.924437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:45 crc kubenswrapper[4730]: I0320 16:00:45.932779 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ptrl6\" (UniqueName: \"kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6\") pod \"cinder-api-0\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " pod="openstack/cinder-api-0" Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.091059 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.092069 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.168104 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.204713 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86bc9f54b4-6szxq"] Mar 20 16:00:46 crc kubenswrapper[4730]: W0320 16:00:46.227683 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625a25c3_e585_4848_bbe1_0bdd4be731a9.slice/crio-4051b918d71242b8ca2e41190a654e8dbaba0ff078f4749ad9adbb01f4288ed1 WatchSource:0}: Error finding container 4051b918d71242b8ca2e41190a654e8dbaba0ff078f4749ad9adbb01f4288ed1: Status 404 returned error can't find the container with id 4051b918d71242b8ca2e41190a654e8dbaba0ff078f4749ad9adbb01f4288ed1 Mar 20 16:00:46 crc kubenswrapper[4730]: W0320 16:00:46.276312 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4dfee88_47ff_4e8b_9f46_60cc17fb0080.slice/crio-8090a449167023dbdca53532fb75f2e14e0fdd389666939cc9b4e5a2d48a7c02 WatchSource:0}: Error finding container 8090a449167023dbdca53532fb75f2e14e0fdd389666939cc9b4e5a2d48a7c02: Status 404 returned error can't find the container with id 
8090a449167023dbdca53532fb75f2e14e0fdd389666939cc9b4e5a2d48a7c02 Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.410450 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" event={"ID":"e4dfee88-47ff-4e8b-9f46-60cc17fb0080","Type":"ContainerStarted","Data":"8090a449167023dbdca53532fb75f2e14e0fdd389666939cc9b4e5a2d48a7c02"} Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.442179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerStarted","Data":"4051b918d71242b8ca2e41190a654e8dbaba0ff078f4749ad9adbb01f4288ed1"} Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.458741 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.514358 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerStarted","Data":"af9edb57a02832528fefe2d244021643694b240c8792bf0b8e54f60c03a47e8d"} Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.514526 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-central-agent" containerID="cri-o://d6846337441238e0631dc47666643f2d85b6c9e548d29144f9972ff195d4dc1e" gracePeriod=30 Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.515033 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.515319 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="proxy-httpd" 
containerID="cri-o://af9edb57a02832528fefe2d244021643694b240c8792bf0b8e54f60c03a47e8d" gracePeriod=30 Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.515372 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="sg-core" containerID="cri-o://71df4b6d20c02608180532f04e26895e3f9ad0248e3128f02a3d2c457f5baa48" gracePeriod=30 Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.515406 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-notification-agent" containerID="cri-o://f29767ca3e9d6cdef0508a609251241ac12e93823a37b6304da4f130070ee420" gracePeriod=30 Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.549691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce"} Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.578481 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.086430644 podStartE2EDuration="47.57826427s" podCreationTimestamp="2026-03-20 15:59:59 +0000 UTC" firstStartedPulling="2026-03-20 16:00:01.452832941 +0000 UTC m=+1260.666204310" lastFinishedPulling="2026-03-20 16:00:44.944666567 +0000 UTC m=+1304.158037936" observedRunningTime="2026-03-20 16:00:46.545743253 +0000 UTC m=+1305.759114642" watchObservedRunningTime="2026-03-20 16:00:46.57826427 +0000 UTC m=+1305.791635639" Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.637902 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86947bcbc8-94hl8"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.648220 4730 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.658480 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54b9958865-vn9kj"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.718952 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:46 crc kubenswrapper[4730]: I0320 16:00:46.765126 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.049659 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.197893 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.231007 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:00:47 crc kubenswrapper[4730]: W0320 16:00:47.377642 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod553d73e1_f14e_4379_b630_38e440eedb73.slice/crio-fa390ac4060d1a6d7727f5c45c81da714c4fb46717c6a3810b619cf41b677f6f WatchSource:0}: Error finding container fa390ac4060d1a6d7727f5c45c81da714c4fb46717c6a3810b619cf41b677f6f: Status 404 returned error can't find the container with id fa390ac4060d1a6d7727f5c45c81da714c4fb46717c6a3810b619cf41b677f6f Mar 20 16:00:47 crc kubenswrapper[4730]: W0320 16:00:47.390452 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff335b2a_909a_4c39_a045_2267c73ac8b2.slice/crio-bb5085b7efdc43fccc1330db4f8eeb5dbaa633cfdd11172926159e432c750243 WatchSource:0}: Error finding container bb5085b7efdc43fccc1330db4f8eeb5dbaa633cfdd11172926159e432c750243: Status 404 returned 
error can't find the container with id bb5085b7efdc43fccc1330db4f8eeb5dbaa633cfdd11172926159e432c750243 Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.594794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerStarted","Data":"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.594878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerStarted","Data":"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.594961 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.597546 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerStarted","Data":"fa390ac4060d1a6d7727f5c45c81da714c4fb46717c6a3810b619cf41b677f6f"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.604463 4730 generic.go:334] "Generic (PLEG): container finished" podID="fc765c5c-7def-4230-b500-d6410c2da475" containerID="bb60bf8d45c378ac22831a9107db723f03064f274377daf0138584d49caa72c4" exitCode=0 Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.604605 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" event={"ID":"fc765c5c-7def-4230-b500-d6410c2da475","Type":"ContainerDied","Data":"bb60bf8d45c378ac22831a9107db723f03064f274377daf0138584d49caa72c4"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.604645 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" 
event={"ID":"fc765c5c-7def-4230-b500-d6410c2da475","Type":"ContainerStarted","Data":"0e83f3546b83e55e3c56c1cd51e6231078002160e2def174a384d53f10d45966"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608214 4730 generic.go:334] "Generic (PLEG): container finished" podID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerID="af9edb57a02832528fefe2d244021643694b240c8792bf0b8e54f60c03a47e8d" exitCode=0 Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608234 4730 generic.go:334] "Generic (PLEG): container finished" podID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerID="71df4b6d20c02608180532f04e26895e3f9ad0248e3128f02a3d2c457f5baa48" exitCode=2 Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608263 4730 generic.go:334] "Generic (PLEG): container finished" podID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerID="d6846337441238e0631dc47666643f2d85b6c9e548d29144f9972ff195d4dc1e" exitCode=0 Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608300 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerDied","Data":"af9edb57a02832528fefe2d244021643694b240c8792bf0b8e54f60c03a47e8d"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608320 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerDied","Data":"71df4b6d20c02608180532f04e26895e3f9ad0248e3128f02a3d2c457f5baa48"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.608354 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerDied","Data":"d6846337441238e0631dc47666643f2d85b6c9e548d29144f9972ff195d4dc1e"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.610146 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b9958865-vn9kj" 
event={"ID":"8f8c40f6-c8d3-4c8c-97eb-643d32774174","Type":"ContainerStarted","Data":"68ccd644408abe10bbe53729b9fa0385e109edeb36678c4555b35581a8c4faae"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.612532 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86947bcbc8-94hl8" event={"ID":"59460a49-c9fe-46c9-b898-d08234ca7cd3","Type":"ContainerStarted","Data":"f6b19889ce7f10111d0868f7ac900efa99cdeb7647b6d1a5d5561cbe20437ebf"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.612557 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86947bcbc8-94hl8" event={"ID":"59460a49-c9fe-46c9-b898-d08234ca7cd3","Type":"ContainerStarted","Data":"dda643c0e52a6ae559ac8082cc3cd0f70fc2700317b9248464038bda0b48cc5d"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.614290 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerStarted","Data":"b9554ee31e1f63a199256bec2bb1d2359bd31822a3145d6062ef5270512d5078"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.614317 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerStarted","Data":"6d15d2ce82d18fec66394661182780c55d2eb065e0325660570454876023436b"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.624462 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" event={"ID":"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d","Type":"ContainerStarted","Data":"52f7b95fb474271e8f80dffb24f266c810b758a2d4678f7825703e2faf3416e4"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.629197 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66f7c676c8-wdfnw" podStartSLOduration=10.629177152 podStartE2EDuration="10.629177152s" podCreationTimestamp="2026-03-20 16:00:37 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:47.619625278 +0000 UTC m=+1306.832996647" watchObservedRunningTime="2026-03-20 16:00:47.629177152 +0000 UTC m=+1306.842548521" Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.633818 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerStarted","Data":"52473fa5ebf9585a5de42b7ba1a1ed907405640eabed30fdba7a035924a392d0"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.638395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerStarted","Data":"4e67bee5c04834a723f337ad60eefc215ef11d3a4fa1f8a1f937528124d8a7cd"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.640378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" event={"ID":"ff335b2a-909a-4c39-a045-2267c73ac8b2","Type":"ContainerStarted","Data":"bb5085b7efdc43fccc1330db4f8eeb5dbaa633cfdd11172926159e432c750243"} Mar 20 16:00:47 crc kubenswrapper[4730]: I0320 16:00:47.962509 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.154367 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.682070 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86947bcbc8-94hl8" event={"ID":"59460a49-c9fe-46c9-b898-d08234ca7cd3","Type":"ContainerStarted","Data":"f7ff0c62dabe1626e435ccd5a9eff6848b2da9515e76484d89f62a803fa4c66a"} Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.683866 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.683907 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.698303 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerStarted","Data":"1b54d6706fe3b9e0df5e79e3bf8df496baa00381f4d47713ff0c38d0b131d0ef"} Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.704237 4730 generic.go:334] "Generic (PLEG): container finished" podID="94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" containerID="350035d4ab56eddb1da4280110af09cc41d7fc179ba6730fdfd0cf3c66bc48f4" exitCode=0 Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.704440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" event={"ID":"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d","Type":"ContainerDied","Data":"350035d4ab56eddb1da4280110af09cc41d7fc179ba6730fdfd0cf3c66bc48f4"} Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.711520 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.720122 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86947bcbc8-94hl8" podStartSLOduration=8.720088459 podStartE2EDuration="8.720088459s" podCreationTimestamp="2026-03-20 16:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:48.708422148 +0000 UTC m=+1307.921793517" watchObservedRunningTime="2026-03-20 16:00:48.720088459 +0000 UTC m=+1307.933459828" Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.722849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" 
event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerStarted","Data":"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15"} Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.761532 4730 generic.go:334] "Generic (PLEG): container finished" podID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerID="e17867e23320294cb0b9160d421d696c692c189cb8d85e17cafc60d15c897466" exitCode=0 Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.763284 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" event={"ID":"ff335b2a-909a-4c39-a045-2267c73ac8b2","Type":"ContainerDied","Data":"e17867e23320294cb0b9160d421d696c692c189cb8d85e17cafc60d15c897466"} Mar 20 16:00:48 crc kubenswrapper[4730]: I0320 16:00:48.771890 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=12.771868897000001 podStartE2EDuration="12.771868897s" podCreationTimestamp="2026-03-20 16:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:48.764991773 +0000 UTC m=+1307.978363142" watchObservedRunningTime="2026-03-20 16:00:48.771868897 +0000 UTC m=+1307.985240266" Mar 20 16:00:49 crc kubenswrapper[4730]: I0320 16:00:49.974264 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.056076 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.104211 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.109565 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291432 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291529 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291793 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291862 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291891 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291940 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.291982 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.292058 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vz6\" (UniqueName: \"kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.292173 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9k6g\" (UniqueName: \"kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g\") pod \"fc765c5c-7def-4230-b500-d6410c2da475\" (UID: \"fc765c5c-7def-4230-b500-d6410c2da475\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.292216 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.292284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 
16:00:50.292385 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0\") pod \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\" (UID: \"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d\") " Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.303831 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g" (OuterVolumeSpecName: "kube-api-access-q9k6g") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "kube-api-access-q9k6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.311882 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9k6g\" (UniqueName: \"kubernetes.io/projected/fc765c5c-7def-4230-b500-d6410c2da475-kube-api-access-q9k6g\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.401819 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config" (OuterVolumeSpecName: "config") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.415676 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.446314 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6" (OuterVolumeSpecName: "kube-api-access-d9vz6") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "kube-api-access-d9vz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.456621 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.476217 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.485031 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config" (OuterVolumeSpecName: "config") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.524815 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.525446 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.525629 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.525706 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vz6\" (UniqueName: \"kubernetes.io/projected/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-kube-api-access-d9vz6\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.525800 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc 
kubenswrapper[4730]: I0320 16:00:50.529923 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.534778 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.571213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.576874 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc765c5c-7def-4230-b500-d6410c2da475" (UID: "fc765c5c-7def-4230-b500-d6410c2da475"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.577610 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" (UID: "94a8baf5-acb2-4f6d-811c-335fd3bd7b0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628049 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628081 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628091 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628100 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628110 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc765c5c-7def-4230-b500-d6410c2da475-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.628120 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.785391 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f6c808e-d523-48bd-8ec2-28b625834317" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" exitCode=1 Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.785705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce"} Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.785820 4730 scope.go:117] "RemoveContainer" containerID="834cc7775c739ec615e04b1c22eba8f136c36cff5d344d3613f2797565551c85" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.786186 4730 scope.go:117] "RemoveContainer" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" Mar 20 16:00:50 crc kubenswrapper[4730]: E0320 16:00:50.786538 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.789050 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerStarted","Data":"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5"} Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.791759 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" 
event={"ID":"fc765c5c-7def-4230-b500-d6410c2da475","Type":"ContainerDied","Data":"0e83f3546b83e55e3c56c1cd51e6231078002160e2def174a384d53f10d45966"} Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.791826 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dccc4d8b9-qqxcj" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.794010 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" event={"ID":"94a8baf5-acb2-4f6d-811c-335fd3bd7b0d","Type":"ContainerDied","Data":"52f7b95fb474271e8f80dffb24f266c810b758a2d4678f7825703e2faf3416e4"} Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.794080 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8448dbfc69-4229t" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.809692 4730 generic.go:334] "Generic (PLEG): container finished" podID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerID="f29767ca3e9d6cdef0508a609251241ac12e93823a37b6304da4f130070ee420" exitCode=0 Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.810622 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.809995 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerDied","Data":"f29767ca3e9d6cdef0508a609251241ac12e93823a37b6304da4f130070ee420"} Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.889425 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.915338 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dccc4d8b9-qqxcj"] Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.940654 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:50 crc kubenswrapper[4730]: I0320 16:00:50.948901 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8448dbfc69-4229t"] Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.157185 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.241467 4730 scope.go:117] "RemoveContainer" containerID="bb60bf8d45c378ac22831a9107db723f03064f274377daf0138584d49caa72c4" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254037 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254091 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254127 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np4pc\" (UniqueName: \"kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc 
kubenswrapper[4730]: I0320 16:00:51.254147 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254174 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.254267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data\") pod \"223c97f9-0680-47b8-bc2e-1c914296d29e\" (UID: \"223c97f9-0680-47b8-bc2e-1c914296d29e\") " Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.257435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.260612 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts" (OuterVolumeSpecName: "scripts") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.260850 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.266414 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc" (OuterVolumeSpecName: "kube-api-access-np4pc") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "kube-api-access-np4pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.355964 4730 scope.go:117] "RemoveContainer" containerID="350035d4ab56eddb1da4280110af09cc41d7fc179ba6730fdfd0cf3c66bc48f4" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.356897 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.356931 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/223c97f9-0680-47b8-bc2e-1c914296d29e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.356956 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.356975 4730 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-np4pc\" (UniqueName: \"kubernetes.io/projected/223c97f9-0680-47b8-bc2e-1c914296d29e-kube-api-access-np4pc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.375401 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.457916 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.527391 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.540344 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data" (OuterVolumeSpecName: "config-data") pod "223c97f9-0680-47b8-bc2e-1c914296d29e" (UID: "223c97f9-0680-47b8-bc2e-1c914296d29e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.546723 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" path="/var/lib/kubelet/pods/94a8baf5-acb2-4f6d-811c-335fd3bd7b0d/volumes" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.547218 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc765c5c-7def-4230-b500-d6410c2da475" path="/var/lib/kubelet/pods/fc765c5c-7def-4230-b500-d6410c2da475/volumes" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.561968 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.562006 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223c97f9-0680-47b8-bc2e-1c914296d29e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.823049 4730 scope.go:117] "RemoveContainer" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" Mar 20 16:00:51 crc kubenswrapper[4730]: E0320 16:00:51.823573 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.829638 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" 
event={"ID":"e4dfee88-47ff-4e8b-9f46-60cc17fb0080","Type":"ContainerStarted","Data":"7adc271839acf868f868394423060886ede5743de715667560a83baa8f102638"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.829694 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" event={"ID":"e4dfee88-47ff-4e8b-9f46-60cc17fb0080","Type":"ContainerStarted","Data":"2e378d4becd72b560e4b2faaf1b0e5fd690320acae0a69c92380d07f339ed239"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.846011 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerStarted","Data":"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.846947 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.858728 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b9958865-vn9kj" event={"ID":"8f8c40f6-c8d3-4c8c-97eb-643d32774174","Type":"ContainerStarted","Data":"df46bfa0a3f633249b00195c32cbff13d4d5759f46bd1fc7e9e141bb53782460"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.858764 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54b9958865-vn9kj" event={"ID":"8f8c40f6-c8d3-4c8c-97eb-643d32774174","Type":"ContainerStarted","Data":"0d6fcfd02e7f08c9519c6dcde8d39e6abb5db9b7085e1f106645291d4ca7c1b0"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.860231 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86bc9f54b4-6szxq" podStartSLOduration=10.174291041 podStartE2EDuration="14.860217643s" podCreationTimestamp="2026-03-20 16:00:37 +0000 UTC" firstStartedPulling="2026-03-20 16:00:46.347361947 +0000 UTC 
m=+1305.560733316" lastFinishedPulling="2026-03-20 16:00:51.033288549 +0000 UTC m=+1310.246659918" observedRunningTime="2026-03-20 16:00:51.858011833 +0000 UTC m=+1311.071383192" watchObservedRunningTime="2026-03-20 16:00:51.860217643 +0000 UTC m=+1311.073589012" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.862244 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerStarted","Data":"6d6a4235c5c9e8bc75e4b872bc2c20836032212d513ddc23145b97813f0b4e12"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.873395 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" event={"ID":"ff335b2a-909a-4c39-a045-2267c73ac8b2","Type":"ContainerStarted","Data":"1a5f9090983f451fb355cec1fdd913ee82666d2f07f611bcec31819f098f8126"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.873723 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.880723 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.883231 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"223c97f9-0680-47b8-bc2e-1c914296d29e","Type":"ContainerDied","Data":"7f071b5518e739d74a059048c81e33bc96125faedfd609490b9b1dda80135229"} Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.883303 4730 scope.go:117] "RemoveContainer" containerID="af9edb57a02832528fefe2d244021643694b240c8792bf0b8e54f60c03a47e8d" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.892175 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6858c8d8f6-k4smz" podStartSLOduration=6.8921570469999995 podStartE2EDuration="6.892157047s" podCreationTimestamp="2026-03-20 16:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:51.877715564 +0000 UTC m=+1311.091086933" watchObservedRunningTime="2026-03-20 16:00:51.892157047 +0000 UTC m=+1311.105528406" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.912527 4730 scope.go:117] "RemoveContainer" containerID="71df4b6d20c02608180532f04e26895e3f9ad0248e3128f02a3d2c457f5baa48" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.915767 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-54b9958865-vn9kj" podStartSLOduration=10.529947873 podStartE2EDuration="14.915749324s" podCreationTimestamp="2026-03-20 16:00:37 +0000 UTC" firstStartedPulling="2026-03-20 16:00:46.647429957 +0000 UTC m=+1305.860801326" lastFinishedPulling="2026-03-20 16:00:51.033231408 +0000 UTC m=+1310.246602777" observedRunningTime="2026-03-20 16:00:51.907707345 +0000 UTC m=+1311.121078714" watchObservedRunningTime="2026-03-20 16:00:51.915749324 +0000 UTC m=+1311.129120693" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.916166 4730 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.916327 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.947526 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" podStartSLOduration=6.947510605 podStartE2EDuration="6.947510605s" podCreationTimestamp="2026-03-20 16:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:51.929071982 +0000 UTC m=+1311.142443371" watchObservedRunningTime="2026-03-20 16:00:51.947510605 +0000 UTC m=+1311.160881974" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.989521 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.992203 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:00:51 crc kubenswrapper[4730]: I0320 16:00:51.998305 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.017728 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018627 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="sg-core" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.018644 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="sg-core" Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018690 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="proxy-httpd" Mar 20 16:00:52 crc 
kubenswrapper[4730]: I0320 16:00:52.018699 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="proxy-httpd" Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018723 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.018731 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018776 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc765c5c-7def-4230-b500-d6410c2da475" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.018785 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc765c5c-7def-4230-b500-d6410c2da475" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018798 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-notification-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.018805 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-notification-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: E0320 16:00:52.018816 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-central-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.018823 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-central-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.019186 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="proxy-httpd" Mar 20 16:00:52 crc kubenswrapper[4730]: 
I0320 16:00:52.019202 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a8baf5-acb2-4f6d-811c-335fd3bd7b0d" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.019217 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-notification-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.019234 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="ceilometer-central-agent" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.019280 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc765c5c-7def-4230-b500-d6410c2da475" containerName="init" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.019298 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" containerName="sg-core" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.022382 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.025197 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.025457 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.028791 4730 scope.go:117] "RemoveContainer" containerID="f29767ca3e9d6cdef0508a609251241ac12e93823a37b6304da4f130070ee420" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.039382 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.064859 4730 scope.go:117] "RemoveContainer" containerID="d6846337441238e0631dc47666643f2d85b6c9e548d29144f9972ff195d4dc1e" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.125565 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dc7dd859f-wtxnj"] Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.127738 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.131582 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.131679 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.136889 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc7dd859f-wtxnj"] Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187088 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187140 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187182 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-internal-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: 
\"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187203 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187320 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187360 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-combined-ca-bundle\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187391 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187411 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-public-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " 
pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187435 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-httpd-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187458 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-ovndb-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187477 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6c7\" (UniqueName: \"kubernetes.io/projected/62339bcb-2edc-4881-a15e-a9387442db89-kube-api-access-6z6c7\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdtvc\" (UniqueName: \"kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.187515 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 
20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288718 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-public-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288749 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-httpd-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288778 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-ovndb-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288804 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z6c7\" (UniqueName: \"kubernetes.io/projected/62339bcb-2edc-4881-a15e-a9387442db89-kube-api-access-6z6c7\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288821 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vdtvc\" (UniqueName: \"kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288840 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288870 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288892 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288932 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-internal-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: 
\"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288950 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.288997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.289037 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-combined-ca-bundle\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.292575 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.292984 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.298073 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-ovndb-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.301260 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-public-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.302105 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.302894 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-combined-ca-bundle\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.303046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.303276 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.303934 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.305459 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.314536 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdtvc\" (UniqueName: \"kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc\") pod \"ceilometer-0\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.316387 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-internal-tls-certs\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.316759 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62339bcb-2edc-4881-a15e-a9387442db89-httpd-config\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.341046 4730 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6z6c7\" (UniqueName: \"kubernetes.io/projected/62339bcb-2edc-4881-a15e-a9387442db89-kube-api-access-6z6c7\") pod \"neutron-5dc7dd859f-wtxnj\" (UID: \"62339bcb-2edc-4881-a15e-a9387442db89\") " pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.345228 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.454750 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.872379 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:00:52 crc kubenswrapper[4730]: W0320 16:00:52.893373 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe109bf0_70d2_41d2_855c_6eb862e568b6.slice/crio-4d621236dd54101584c726bca47e76d324d3fb96b4a4404920b5e18a6a4fbb39 WatchSource:0}: Error finding container 4d621236dd54101584c726bca47e76d324d3fb96b4a4404920b5e18a6a4fbb39: Status 404 returned error can't find the container with id 4d621236dd54101584c726bca47e76d324d3fb96b4a4404920b5e18a6a4fbb39 Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.930988 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerStarted","Data":"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7"} Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.931180 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api-log" containerID="cri-o://65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" gracePeriod=30 Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 
16:00:52.931370 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.931489 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api" containerID="cri-o://3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" gracePeriod=30 Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.959444 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerStarted","Data":"4f88e22c1abf31db26b2de1e0622d6921372e0cee954fd4aea4d817c4e543d05"} Mar 20 16:00:52 crc kubenswrapper[4730]: I0320 16:00:52.969120 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.969099561 podStartE2EDuration="7.969099561s" podCreationTimestamp="2026-03-20 16:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:52.957000741 +0000 UTC m=+1312.170372110" watchObservedRunningTime="2026-03-20 16:00:52.969099561 +0000 UTC m=+1312.182470930" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.169473 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.583057069 podStartE2EDuration="8.169447022s" podCreationTimestamp="2026-03-20 16:00:45 +0000 UTC" firstStartedPulling="2026-03-20 16:00:46.907715118 +0000 UTC m=+1306.121086487" lastFinishedPulling="2026-03-20 16:00:47.494105071 +0000 UTC m=+1306.707476440" observedRunningTime="2026-03-20 16:00:52.989807704 +0000 UTC m=+1312.203179073" watchObservedRunningTime="2026-03-20 16:00:53.169447022 +0000 UTC m=+1312.382818401" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.176635 4730 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc7dd859f-wtxnj"] Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.597465 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223c97f9-0680-47b8-bc2e-1c914296d29e" path="/var/lib/kubelet/pods/223c97f9-0680-47b8-bc2e-1c914296d29e/volumes" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.833125 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.961823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.962089 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.962133 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrl6\" (UniqueName: \"kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.962464 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 
16:00:53.962486 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.962509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.962535 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts\") pod \"553d73e1-f14e-4379-b630-38e440eedb73\" (UID: \"553d73e1-f14e-4379-b630-38e440eedb73\") " Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.965044 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs" (OuterVolumeSpecName: "logs") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.965595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.968404 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.976625 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6" (OuterVolumeSpecName: "kube-api-access-ptrl6") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "kube-api-access-ptrl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.977451 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts" (OuterVolumeSpecName: "scripts") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.992572 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerStarted","Data":"8853aa1f17e6388ce020212c8d73958c09bbf6fcc38c4d043313ee458cbde4ad"} Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.992632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerStarted","Data":"c73d9cedf11e6a3a273cb136651dcbf0aa00b21555ddca3a8c2b2551a6375a21"} Mar 20 16:00:53 crc kubenswrapper[4730]: I0320 16:00:53.992646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerStarted","Data":"4d621236dd54101584c726bca47e76d324d3fb96b4a4404920b5e18a6a4fbb39"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.005135 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.042688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data" (OuterVolumeSpecName: "config-data") pod "553d73e1-f14e-4379-b630-38e440eedb73" (UID: "553d73e1-f14e-4379-b630-38e440eedb73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.045378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7dd859f-wtxnj" event={"ID":"62339bcb-2edc-4881-a15e-a9387442db89","Type":"ContainerStarted","Data":"e9bcbd3723d2f293fb36e8e3445df05602c0d62eb7e6982ff997cb3f24bf3fdc"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.045837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7dd859f-wtxnj" event={"ID":"62339bcb-2edc-4881-a15e-a9387442db89","Type":"ContainerStarted","Data":"b7fcbce1d55c9ecbe53035d1ab156cb679df475f1a8bcf8ecf5b13bc843b652e"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.045855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc7dd859f-wtxnj" event={"ID":"62339bcb-2edc-4881-a15e-a9387442db89","Type":"ContainerStarted","Data":"18fb0a41e0b45d2c14e6c163aa9e9e0b6fce377022fcbab6c5e0d14237a1f7c7"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.045894 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.058917 4730 generic.go:334] "Generic (PLEG): container finished" podID="553d73e1-f14e-4379-b630-38e440eedb73" containerID="3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" exitCode=0 Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.058950 4730 generic.go:334] "Generic (PLEG): container finished" podID="553d73e1-f14e-4379-b630-38e440eedb73" containerID="65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" exitCode=143 Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.059814 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.063924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerDied","Data":"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.063974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerDied","Data":"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.063987 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"553d73e1-f14e-4379-b630-38e440eedb73","Type":"ContainerDied","Data":"fa390ac4060d1a6d7727f5c45c81da714c4fb46717c6a3810b619cf41b677f6f"} Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.064005 4730 scope.go:117] "RemoveContainer" containerID="3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.066594 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.066624 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/553d73e1-f14e-4379-b630-38e440eedb73-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.067112 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/553d73e1-f14e-4379-b630-38e440eedb73-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.067130 4730 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.067140 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.067151 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553d73e1-f14e-4379-b630-38e440eedb73-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.067162 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrl6\" (UniqueName: \"kubernetes.io/projected/553d73e1-f14e-4379-b630-38e440eedb73-kube-api-access-ptrl6\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.072530 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dc7dd859f-wtxnj" podStartSLOduration=2.072511238 podStartE2EDuration="2.072511238s" podCreationTimestamp="2026-03-20 16:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:54.066696218 +0000 UTC m=+1313.280067587" watchObservedRunningTime="2026-03-20 16:00:54.072511238 +0000 UTC m=+1313.285882597" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.143384 4730 scope.go:117] "RemoveContainer" containerID="65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.243731 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.252599 4730 scope.go:117] "RemoveContainer" 
containerID="3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" Mar 20 16:00:54 crc kubenswrapper[4730]: E0320 16:00:54.253897 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7\": container with ID starting with 3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7 not found: ID does not exist" containerID="3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.253932 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7"} err="failed to get container status \"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7\": rpc error: code = NotFound desc = could not find container \"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7\": container with ID starting with 3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7 not found: ID does not exist" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.253952 4730 scope.go:117] "RemoveContainer" containerID="65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.255355 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:54 crc kubenswrapper[4730]: E0320 16:00:54.255697 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5\": container with ID starting with 65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5 not found: ID does not exist" containerID="65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.255735 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5"} err="failed to get container status \"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5\": rpc error: code = NotFound desc = could not find container \"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5\": container with ID starting with 65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5 not found: ID does not exist" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.255751 4730 scope.go:117] "RemoveContainer" containerID="3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.259613 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7"} err="failed to get container status \"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7\": rpc error: code = NotFound desc = could not find container \"3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7\": container with ID starting with 3644a90df0b76ee2298de84878e1467667a7cf0df46bec7faf290aad4ad0f2a7 not found: ID does not exist" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.259636 4730 scope.go:117] "RemoveContainer" containerID="65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.262363 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5"} err="failed to get container status \"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5\": rpc error: code = NotFound desc = could not find container \"65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5\": container with ID starting with 
65229dd1742ef08926ef8e9608941aa087b79288d7c286c646e5a92bbdde1bc5 not found: ID does not exist" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.267348 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:54 crc kubenswrapper[4730]: E0320 16:00:54.267796 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api-log" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.267820 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api-log" Mar 20 16:00:54 crc kubenswrapper[4730]: E0320 16:00:54.267851 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.267858 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.268042 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api-log" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.268058 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="553d73e1-f14e-4379-b630-38e440eedb73" containerName="cinder-api" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.269099 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.273959 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.274181 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.274517 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.275972 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.377683 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.377745 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-scripts\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.377780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-logs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.377865 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.377931 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbzzr\" (UniqueName: \"kubernetes.io/projected/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-kube-api-access-bbzzr\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.378028 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data-custom\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.378069 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.378092 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.378147 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.494918 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbzzr\" (UniqueName: \"kubernetes.io/projected/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-kube-api-access-bbzzr\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data-custom\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495102 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495141 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495240 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-scripts\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495286 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495322 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-logs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.495391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.502359 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.502652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-scripts\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc 
kubenswrapper[4730]: I0320 16:00:54.502871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-logs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.509532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.512144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.512706 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.513217 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.514912 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.531860 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbzzr\" (UniqueName: \"kubernetes.io/projected/4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa-kube-api-access-bbzzr\") pod \"cinder-api-0\" (UID: \"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa\") " pod="openstack/cinder-api-0" Mar 20 16:00:54 crc kubenswrapper[4730]: I0320 16:00:54.589167 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 16:00:55 crc kubenswrapper[4730]: I0320 16:00:55.100556 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerStarted","Data":"786b95c139b35cbde05b7814738f76287b10b59ef0dc76fe0f5bcee037ab03c4"} Mar 20 16:00:55 crc kubenswrapper[4730]: I0320 16:00:55.294338 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 16:00:55 crc kubenswrapper[4730]: I0320 16:00:55.551796 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553d73e1-f14e-4379-b630-38e440eedb73" path="/var/lib/kubelet/pods/553d73e1-f14e-4379-b630-38e440eedb73/volumes" Mar 20 16:00:55 crc kubenswrapper[4730]: I0320 16:00:55.709392 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 16:00:55 crc kubenswrapper[4730]: I0320 16:00:55.968802 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.078283 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.094380 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.192229 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.200170 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.200483 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerName="dnsmasq-dns" containerID="cri-o://a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1" gracePeriod=10 Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.230819 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa","Type":"ContainerStarted","Data":"1edd5d33a486fdcefab106ad673dae51ea84827be13cc96e58ca8a06e7819a03"} Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.230862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa","Type":"ContainerStarted","Data":"625fc2c6bb4bee7ecb35eb34bb8b6fbc56f99ceec30755b63f3bc3ef203e90f6"} Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.354700 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.809295 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.917090 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.928836 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.972972 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.973052 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.973078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.973196 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.973290 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.973307 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5dq8\" (UniqueName: \"kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8\") pod \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\" (UID: \"82ffcdbb-cebb-443a-a8af-3c3543bea13d\") " Mar 20 16:00:56 crc kubenswrapper[4730]: I0320 16:00:56.981434 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8" (OuterVolumeSpecName: "kube-api-access-m5dq8") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "kube-api-access-m5dq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.046895 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.076113 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.076472 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.076497 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5dq8\" (UniqueName: \"kubernetes.io/projected/82ffcdbb-cebb-443a-a8af-3c3543bea13d-kube-api-access-m5dq8\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.076507 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.080199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.082365 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config" (OuterVolumeSpecName: "config") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.105297 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82ffcdbb-cebb-443a-a8af-3c3543bea13d" (UID: "82ffcdbb-cebb-443a-a8af-3c3543bea13d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.178110 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.178150 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.178161 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ffcdbb-cebb-443a-a8af-3c3543bea13d-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.258708 4730 generic.go:334] "Generic (PLEG): container finished" podID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerID="a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1" exitCode=0 Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.258749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" event={"ID":"82ffcdbb-cebb-443a-a8af-3c3543bea13d","Type":"ContainerDied","Data":"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1"} Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.259042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" event={"ID":"82ffcdbb-cebb-443a-a8af-3c3543bea13d","Type":"ContainerDied","Data":"7a2cb82ca156020b21392ad78c85cb4ddbdb0874dad3d5fcc0112cae0cde0511"} Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.258814 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77878fc4cf-8s7hs" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.259076 4730 scope.go:117] "RemoveContainer" containerID="a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.259364 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="cinder-scheduler" containerID="cri-o://6d6a4235c5c9e8bc75e4b872bc2c20836032212d513ddc23145b97813f0b4e12" gracePeriod=30 Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.259532 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="probe" containerID="cri-o://4f88e22c1abf31db26b2de1e0622d6921372e0cee954fd4aea4d817c4e543d05" gracePeriod=30 Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.266519 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.298664 4730 scope.go:117] "RemoveContainer" containerID="8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.525408 4730 scope.go:117] "RemoveContainer" containerID="a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1" Mar 20 16:00:57 crc kubenswrapper[4730]: E0320 16:00:57.525933 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1\": container with ID starting with a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1 not found: ID does not exist" containerID="a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.525985 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1"} err="failed to get container status \"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1\": rpc error: code = NotFound desc = could not find container \"a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1\": container with ID starting with a42cd9a54cab5432c7c6a61bb8a81e8c16b97e8e15d954234ce03c8ac58b65f1 not found: ID does not exist" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.526017 4730 scope.go:117] "RemoveContainer" containerID="8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52" Mar 20 16:00:57 crc kubenswrapper[4730]: E0320 16:00:57.528232 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52\": container with ID starting with 8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52 not found: ID does not exist" containerID="8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.528283 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52"} err="failed to get container status \"8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52\": rpc error: code = NotFound desc = could not find container \"8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52\": container with ID 
starting with 8048a1935689b83c55f9f97ca86b535cb03200a9107add7dbb79c33ad2385a52 not found: ID does not exist" Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.563141 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:57 crc kubenswrapper[4730]: I0320 16:00:57.585190 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77878fc4cf-8s7hs"] Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.100762 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.268240 4730 generic.go:334] "Generic (PLEG): container finished" podID="211c7770-64fe-4943-becb-bc02113fd867" containerID="4f88e22c1abf31db26b2de1e0622d6921372e0cee954fd4aea4d817c4e543d05" exitCode=0 Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.268293 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerDied","Data":"4f88e22c1abf31db26b2de1e0622d6921372e0cee954fd4aea4d817c4e543d05"} Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.271793 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerStarted","Data":"73bfac78764299ab2b8b9302430bca90ecedaa10df4b93eb8c976fe0582a6af1"} Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.272983 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.274842 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa","Type":"ContainerStarted","Data":"dddf7ad95d1d0c0afeeb8c41ccf7fe37f8c38c3a24f4285f2ff9b97dfed77c66"} Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.275316 
4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.298565 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.936650938 podStartE2EDuration="7.298531405s" podCreationTimestamp="2026-03-20 16:00:51 +0000 UTC" firstStartedPulling="2026-03-20 16:00:52.909621661 +0000 UTC m=+1312.122993030" lastFinishedPulling="2026-03-20 16:00:57.271502138 +0000 UTC m=+1316.484873497" observedRunningTime="2026-03-20 16:00:58.291296353 +0000 UTC m=+1317.504667712" watchObservedRunningTime="2026-03-20 16:00:58.298531405 +0000 UTC m=+1317.511902774" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.318745 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.318729167 podStartE2EDuration="4.318729167s" podCreationTimestamp="2026-03-20 16:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:00:58.310202876 +0000 UTC m=+1317.523574245" watchObservedRunningTime="2026-03-20 16:00:58.318729167 +0000 UTC m=+1317.532100536" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.358808 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86947bcbc8-94hl8" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.489475 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.489965 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" containerID="cri-o://4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add" gracePeriod=30 Mar 20 16:00:58 crc 
kubenswrapper[4730]: I0320 16:00:58.490216 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" containerID="cri-o://36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711" gracePeriod=30 Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.496484 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.496633 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.496726 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Mar 20 16:00:58 crc kubenswrapper[4730]: I0320 16:00:58.496819 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": EOF" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.313614 4730 generic.go:334] "Generic (PLEG): container finished" podID="211c7770-64fe-4943-becb-bc02113fd867" containerID="6d6a4235c5c9e8bc75e4b872bc2c20836032212d513ddc23145b97813f0b4e12" exitCode=0 Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.313900 4730 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerDied","Data":"6d6a4235c5c9e8bc75e4b872bc2c20836032212d513ddc23145b97813f0b4e12"} Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.318914 4730 generic.go:334] "Generic (PLEG): container finished" podID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerID="4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add" exitCode=143 Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.319888 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerDied","Data":"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add"} Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.554680 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" path="/var/lib/kubelet/pods/82ffcdbb-cebb-443a-a8af-3c3543bea13d/volumes" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.577911 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745234 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srhzx\" (UniqueName: \"kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745636 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745715 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745788 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745872 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745947 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.745968 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts\") pod \"211c7770-64fe-4943-becb-bc02113fd867\" (UID: \"211c7770-64fe-4943-becb-bc02113fd867\") " Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.746340 4730 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/211c7770-64fe-4943-becb-bc02113fd867-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.768521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx" (OuterVolumeSpecName: "kube-api-access-srhzx") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "kube-api-access-srhzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.769448 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.774422 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts" (OuterVolumeSpecName: "scripts") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.805390 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.847721 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.847751 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.847763 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.847771 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srhzx\" (UniqueName: \"kubernetes.io/projected/211c7770-64fe-4943-becb-bc02113fd867-kube-api-access-srhzx\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.876110 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data" (OuterVolumeSpecName: "config-data") pod "211c7770-64fe-4943-becb-bc02113fd867" (UID: "211c7770-64fe-4943-becb-bc02113fd867"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.949726 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211c7770-64fe-4943-becb-bc02113fd867-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.974071 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.974110 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.974121 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 20 16:00:59 crc kubenswrapper[4730]: I0320 16:00:59.974836 4730 scope.go:117] "RemoveContainer" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" Mar 20 16:00:59 crc kubenswrapper[4730]: E0320 16:00:59.975049 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.137948 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29567041-zmx9n"] Mar 20 16:01:00 crc kubenswrapper[4730]: E0320 16:01:00.138354 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerName="init" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138371 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" 
containerName="init" Mar 20 16:01:00 crc kubenswrapper[4730]: E0320 16:01:00.138395 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerName="dnsmasq-dns" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138403 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerName="dnsmasq-dns" Mar 20 16:01:00 crc kubenswrapper[4730]: E0320 16:01:00.138421 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="probe" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138428 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="probe" Mar 20 16:01:00 crc kubenswrapper[4730]: E0320 16:01:00.138449 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="cinder-scheduler" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138454 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="cinder-scheduler" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138643 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="cinder-scheduler" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138655 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ffcdbb-cebb-443a-a8af-3c3543bea13d" containerName="dnsmasq-dns" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.138671 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="211c7770-64fe-4943-becb-bc02113fd867" containerName="probe" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.139336 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.149701 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29567041-zmx9n"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.265803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrlg\" (UniqueName: \"kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.265866 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.265891 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.265980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.330095 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.330697 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"211c7770-64fe-4943-becb-bc02113fd867","Type":"ContainerDied","Data":"4e67bee5c04834a723f337ad60eefc215ef11d3a4fa1f8a1f937528124d8a7cd"} Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.330734 4730 scope.go:117] "RemoveContainer" containerID="4f88e22c1abf31db26b2de1e0622d6921372e0cee954fd4aea4d817c4e543d05" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.354231 4730 scope.go:117] "RemoveContainer" containerID="6d6a4235c5c9e8bc75e4b872bc2c20836032212d513ddc23145b97813f0b4e12" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.370740 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrlg\" (UniqueName: \"kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.370814 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.370845 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.370943 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.375029 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.378144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.388331 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.388558 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.401007 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrlg\" (UniqueName: \"kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg\") pod \"keystone-cron-29567041-zmx9n\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc 
kubenswrapper[4730]: I0320 16:01:00.405936 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.416317 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.418088 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.420687 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.426597 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.493164 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584449 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584489 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584510 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rql\" (UniqueName: \"kubernetes.io/projected/8ff07e31-53ad-49da-941d-607115f965e0-kube-api-access-v5rql\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ff07e31-53ad-49da-941d-607115f965e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.584623 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.601310 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.601609 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api-log" containerID="cri-o://b9554ee31e1f63a199256bec2bb1d2359bd31822a3145d6062ef5270512d5078" gracePeriod=30 Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.601840 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api" containerID="cri-o://1b54d6706fe3b9e0df5e79e3bf8df496baa00381f4d47713ff0c38d0b131d0ef" gracePeriod=30 Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687629 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687649 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5rql\" (UniqueName: \"kubernetes.io/projected/8ff07e31-53ad-49da-941d-607115f965e0-kube-api-access-v5rql\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687718 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ff07e31-53ad-49da-941d-607115f965e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687739 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.687802 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.688087 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ff07e31-53ad-49da-941d-607115f965e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.692808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.694144 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.694333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.704756 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff07e31-53ad-49da-941d-607115f965e0-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.709177 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5rql\" (UniqueName: \"kubernetes.io/projected/8ff07e31-53ad-49da-941d-607115f965e0-kube-api-access-v5rql\") pod \"cinder-scheduler-0\" (UID: \"8ff07e31-53ad-49da-941d-607115f965e0\") " pod="openstack/cinder-scheduler-0" Mar 20 16:01:00 crc kubenswrapper[4730]: I0320 16:01:00.805709 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.100921 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29567041-zmx9n"] Mar 20 16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.359633 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567041-zmx9n" event={"ID":"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae","Type":"ContainerStarted","Data":"a19bcb5b8c66a4a747a916cf2427a3e88a6ef33424161955f95912089a76ca70"} Mar 20 16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.392512 4730 generic.go:334] "Generic (PLEG): container finished" podID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerID="b9554ee31e1f63a199256bec2bb1d2359bd31822a3145d6062ef5270512d5078" exitCode=143 Mar 20 16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.392666 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerDied","Data":"b9554ee31e1f63a199256bec2bb1d2359bd31822a3145d6062ef5270512d5078"} Mar 20 16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.415654 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 
16:01:01 crc kubenswrapper[4730]: I0320 16:01:01.568101 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211c7770-64fe-4943-becb-bc02113fd867" path="/var/lib/kubelet/pods/211c7770-64fe-4943-becb-bc02113fd867/volumes" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.244725 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:40200->10.217.0.172:9322: read: connection reset by peer" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.244808 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:40198->10.217.0.172:9322: read: connection reset by peer" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.425753 4730 generic.go:334] "Generic (PLEG): container finished" podID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerID="1b54d6706fe3b9e0df5e79e3bf8df496baa00381f4d47713ff0c38d0b131d0ef" exitCode=0 Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.426123 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerDied","Data":"1b54d6706fe3b9e0df5e79e3bf8df496baa00381f4d47713ff0c38d0b131d0ef"} Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.435559 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ff07e31-53ad-49da-941d-607115f965e0","Type":"ContainerStarted","Data":"54e760b727beebcf73820b5976bbc47da91de66eac2b9c3e4c9962bc51dcc205"} Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.435606 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"8ff07e31-53ad-49da-941d-607115f965e0","Type":"ContainerStarted","Data":"eae88a4e3b6fca9302c6691c9973e47a8b0ce9904684def6fd84b8f298a43bec"} Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.446848 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567041-zmx9n" event={"ID":"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae","Type":"ContainerStarted","Data":"f176bdad8c7c8b3ae0a8f361e9d7c67c47a36840f5e1630151c5c006f2c36ca0"} Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.469839 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29567041-zmx9n" podStartSLOduration=2.46982058 podStartE2EDuration="2.46982058s" podCreationTimestamp="2026-03-20 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:02.460976362 +0000 UTC m=+1321.674347741" watchObservedRunningTime="2026-03-20 16:01:02.46982058 +0000 UTC m=+1321.683191949" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.617726 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": read tcp 10.217.0.2:48578->10.217.0.176:9311: read: connection reset by peer" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.767413 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957066 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data\") pod \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957487 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle\") pod \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957566 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs\") pod \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lq6\" (UniqueName: \"kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6\") pod \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957625 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca\") pod \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\" (UID: \"aabd3bd6-2cee-47b8-9174-ad9ea1415e82\") " Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.957872 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs" (OuterVolumeSpecName: "logs") pod "aabd3bd6-2cee-47b8-9174-ad9ea1415e82" (UID: "aabd3bd6-2cee-47b8-9174-ad9ea1415e82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.958517 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:02 crc kubenswrapper[4730]: I0320 16:01:02.964790 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6" (OuterVolumeSpecName: "kube-api-access-m2lq6") pod "aabd3bd6-2cee-47b8-9174-ad9ea1415e82" (UID: "aabd3bd6-2cee-47b8-9174-ad9ea1415e82"). InnerVolumeSpecName "kube-api-access-m2lq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.007906 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabd3bd6-2cee-47b8-9174-ad9ea1415e82" (UID: "aabd3bd6-2cee-47b8-9174-ad9ea1415e82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.013767 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "aabd3bd6-2cee-47b8-9174-ad9ea1415e82" (UID: "aabd3bd6-2cee-47b8-9174-ad9ea1415e82"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.055759 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data" (OuterVolumeSpecName: "config-data") pod "aabd3bd6-2cee-47b8-9174-ad9ea1415e82" (UID: "aabd3bd6-2cee-47b8-9174-ad9ea1415e82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.060062 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.060090 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.060101 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lq6\" (UniqueName: \"kubernetes.io/projected/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-kube-api-access-m2lq6\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.060111 4730 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aabd3bd6-2cee-47b8-9174-ad9ea1415e82-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.326066 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.458006 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.458033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"aabd3bd6-2cee-47b8-9174-ad9ea1415e82","Type":"ContainerDied","Data":"6d15d2ce82d18fec66394661182780c55d2eb065e0325660570454876023436b"} Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.458095 4730 scope.go:117] "RemoveContainer" containerID="1b54d6706fe3b9e0df5e79e3bf8df496baa00381f4d47713ff0c38d0b131d0ef" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.467756 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ff07e31-53ad-49da-941d-607115f965e0","Type":"ContainerStarted","Data":"26b8db254782a9a742d6cabf7c1b04ee4c13e5ad7e13bfc014ba09e458ab0f58"} Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.473912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom\") pod \"625a25c3-e585-4848-bbe1-0bdd4be731a9\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.474207 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs\") pod \"625a25c3-e585-4848-bbe1-0bdd4be731a9\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.474339 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle\") pod \"625a25c3-e585-4848-bbe1-0bdd4be731a9\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.474379 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data\") pod \"625a25c3-e585-4848-bbe1-0bdd4be731a9\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.474648 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfcrj\" (UniqueName: \"kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj\") pod \"625a25c3-e585-4848-bbe1-0bdd4be731a9\" (UID: \"625a25c3-e585-4848-bbe1-0bdd4be731a9\") " Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.474951 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs" (OuterVolumeSpecName: "logs") pod "625a25c3-e585-4848-bbe1-0bdd4be731a9" (UID: "625a25c3-e585-4848-bbe1-0bdd4be731a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.488586 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj" (OuterVolumeSpecName: "kube-api-access-nfcrj") pod "625a25c3-e585-4848-bbe1-0bdd4be731a9" (UID: "625a25c3-e585-4848-bbe1-0bdd4be731a9"). InnerVolumeSpecName "kube-api-access-nfcrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.488962 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "625a25c3-e585-4848-bbe1-0bdd4be731a9" (UID: "625a25c3-e585-4848-bbe1-0bdd4be731a9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.495135 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfcrj\" (UniqueName: \"kubernetes.io/projected/625a25c3-e585-4848-bbe1-0bdd4be731a9-kube-api-access-nfcrj\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.495172 4730 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.495182 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/625a25c3-e585-4848-bbe1-0bdd4be731a9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.506941 4730 scope.go:117] "RemoveContainer" containerID="b9554ee31e1f63a199256bec2bb1d2359bd31822a3145d6062ef5270512d5078" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.507038 4730 generic.go:334] "Generic (PLEG): container finished" podID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerID="36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711" exitCode=0 Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.507191 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66f7c676c8-wdfnw" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.507192 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerDied","Data":"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711"} Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.507263 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66f7c676c8-wdfnw" event={"ID":"625a25c3-e585-4848-bbe1-0bdd4be731a9","Type":"ContainerDied","Data":"4051b918d71242b8ca2e41190a654e8dbaba0ff078f4749ad9adbb01f4288ed1"} Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.507317 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.507303632 podStartE2EDuration="3.507303632s" podCreationTimestamp="2026-03-20 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:03.489119255 +0000 UTC m=+1322.702490634" watchObservedRunningTime="2026-03-20 16:01:03.507303632 +0000 UTC m=+1322.720675001" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.515688 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "625a25c3-e585-4848-bbe1-0bdd4be731a9" (UID: "625a25c3-e585-4848-bbe1-0bdd4be731a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.598100 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.605168 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data" (OuterVolumeSpecName: "config-data") pod "625a25c3-e585-4848-bbe1-0bdd4be731a9" (UID: "625a25c3-e585-4848-bbe1-0bdd4be731a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.611450 4730 scope.go:117] "RemoveContainer" containerID="36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.615219 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.628352 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.635900 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.636293 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636311 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.636329 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api" Mar 20 16:01:03 crc 
kubenswrapper[4730]: I0320 16:01:03.636335 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api" Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.636349 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636355 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.636375 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636383 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636589 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636615 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636626 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.636643 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" containerName="watcher-api-log" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.637611 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.641102 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.641269 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.644377 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.659350 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.660604 4730 scope.go:117] "RemoveContainer" containerID="4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.699761 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/625a25c3-e585-4848-bbe1-0bdd4be731a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.700767 4730 scope.go:117] "RemoveContainer" containerID="36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711" Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.704659 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711\": container with ID starting with 36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711 not found: ID does not exist" containerID="36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.704710 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711"} err="failed to get container status \"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711\": rpc error: code = NotFound desc = could not find container \"36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711\": container with ID starting with 36d68edd252a5f941948a382970d79860f0b64a17a4a5f197bbb8ab68f618711 not found: ID does not exist" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.704739 4730 scope.go:117] "RemoveContainer" containerID="4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add" Mar 20 16:01:03 crc kubenswrapper[4730]: E0320 16:01:03.711451 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add\": container with ID starting with 4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add not found: ID does not exist" containerID="4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.711506 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add"} err="failed to get container status \"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add\": rpc error: code = NotFound desc = could not find container \"4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add\": container with ID starting with 4b1a913b558b33eb77f5f4bce1a49632877936d63ae7264946c7f559a4317add not found: ID does not exist" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.800761 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data\") pod \"watcher-api-0\" (UID: 
\"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.800793 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.800818 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.801597 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.801654 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5wd\" (UniqueName: \"kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.801821 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc 
kubenswrapper[4730]: I0320 16:01:03.801915 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.854180 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.863554 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-66f7c676c8-wdfnw"] Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903639 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903746 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk5wd\" (UniqueName: \"kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903825 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903880 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.903962 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.904202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.904421 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.908801 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.908959 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.912580 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.912652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.922920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.926155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk5wd\" (UniqueName: \"kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd\") pod \"watcher-api-0\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " pod="openstack/watcher-api-0" Mar 20 16:01:03 crc kubenswrapper[4730]: I0320 16:01:03.981717 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:01:04 crc kubenswrapper[4730]: I0320 16:01:04.555754 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.528517 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerStarted","Data":"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a"} Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.529126 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.529143 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerStarted","Data":"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301"} Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.529165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerStarted","Data":"4c2c6a630c73d61868e537787c0ffc30dd54b3a6ece7e73f02c494b3afd3c924"} Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.559333 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.5593102009999997 podStartE2EDuration="2.559310201s" podCreationTimestamp="2026-03-20 16:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:05.558074234 +0000 UTC m=+1324.771445613" watchObservedRunningTime="2026-03-20 16:01:05.559310201 +0000 UTC m=+1324.772681570" Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.559826 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" path="/var/lib/kubelet/pods/625a25c3-e585-4848-bbe1-0bdd4be731a9/volumes" Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.560613 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabd3bd6-2cee-47b8-9174-ad9ea1415e82" path="/var/lib/kubelet/pods/aabd3bd6-2cee-47b8-9174-ad9ea1415e82/volumes" Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.806735 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 16:01:05 crc kubenswrapper[4730]: I0320 16:01:05.986771 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:01:06 crc kubenswrapper[4730]: I0320 16:01:06.097069 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78b446cdb6-zs6nw" Mar 20 16:01:06 crc kubenswrapper[4730]: I0320 16:01:06.373851 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6fb7949f77-2l9t7" Mar 20 16:01:06 crc kubenswrapper[4730]: I0320 16:01:06.540289 4730 generic.go:334] "Generic (PLEG): container finished" podID="d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" containerID="f176bdad8c7c8b3ae0a8f361e9d7c67c47a36840f5e1630151c5c006f2c36ca0" exitCode=0 Mar 20 16:01:06 crc kubenswrapper[4730]: I0320 16:01:06.540376 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567041-zmx9n" event={"ID":"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae","Type":"ContainerDied","Data":"f176bdad8c7c8b3ae0a8f361e9d7c67c47a36840f5e1630151c5c006f2c36ca0"} Mar 20 16:01:07 crc kubenswrapper[4730]: I0320 16:01:07.377659 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 20 16:01:07 crc kubenswrapper[4730]: I0320 16:01:07.956034 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.088535 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data\") pod \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.088593 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle\") pod \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.088728 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkrlg\" (UniqueName: \"kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg\") pod \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.088811 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys\") pod \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\" (UID: \"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae\") " Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.106935 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg" (OuterVolumeSpecName: "kube-api-access-rkrlg") pod "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" (UID: "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae"). InnerVolumeSpecName "kube-api-access-rkrlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.112177 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" (UID: "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.133489 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" (UID: "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.162354 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.162808 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66f7c676c8-wdfnw" podUID="625a25c3-e585-4848-bbe1-0bdd4be731a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": dial tcp 10.217.0.176:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.187400 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data" (OuterVolumeSpecName: "config-data") pod 
"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" (UID: "d3747d18-1b1e-4c43-ac1a-efeeb453b1ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.190515 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkrlg\" (UniqueName: \"kubernetes.io/projected/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-kube-api-access-rkrlg\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.190547 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.190558 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.190572 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3747d18-1b1e-4c43-ac1a-efeeb453b1ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.399789 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.563033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567041-zmx9n" event={"ID":"d3747d18-1b1e-4c43-ac1a-efeeb453b1ae","Type":"ContainerDied","Data":"a19bcb5b8c66a4a747a916cf2427a3e88a6ef33424161955f95912089a76ca70"} Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.563097 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a19bcb5b8c66a4a747a916cf2427a3e88a6ef33424161955f95912089a76ca70" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 
16:01:08.563104 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29567041-zmx9n" Mar 20 16:01:08 crc kubenswrapper[4730]: I0320 16:01:08.982557 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.400095 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 16:01:09 crc kubenswrapper[4730]: E0320 16:01:09.400782 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" containerName="keystone-cron" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.400794 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" containerName="keystone-cron" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.400975 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3747d18-1b1e-4c43-ac1a-efeeb453b1ae" containerName="keystone-cron" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.401666 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.403562 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6tzwx" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.404627 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.405650 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.418404 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.510612 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.510704 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.510805 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.510838 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvd7\" (UniqueName: \"kubernetes.io/projected/a893eba7-9715-4599-93c2-0365a45134e9-kube-api-access-khvd7\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.612326 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.612641 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvd7\" (UniqueName: \"kubernetes.io/projected/a893eba7-9715-4599-93c2-0365a45134e9-kube-api-access-khvd7\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.612809 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.612972 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.614041 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.622664 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.624062 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a893eba7-9715-4599-93c2-0365a45134e9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.634652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvd7\" (UniqueName: \"kubernetes.io/projected/a893eba7-9715-4599-93c2-0365a45134e9-kube-api-access-khvd7\") pod \"openstackclient\" (UID: \"a893eba7-9715-4599-93c2-0365a45134e9\") " pod="openstack/openstackclient" Mar 20 16:01:09 crc kubenswrapper[4730]: I0320 16:01:09.720204 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 16:01:10 crc kubenswrapper[4730]: I0320 16:01:10.206336 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 16:01:10 crc kubenswrapper[4730]: W0320 16:01:10.210440 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda893eba7_9715_4599_93c2_0365a45134e9.slice/crio-b3e08788439002bcf967b00fb1086c9348f760c4292faa4ae4e4280cc7aca24d WatchSource:0}: Error finding container b3e08788439002bcf967b00fb1086c9348f760c4292faa4ae4e4280cc7aca24d: Status 404 returned error can't find the container with id b3e08788439002bcf967b00fb1086c9348f760c4292faa4ae4e4280cc7aca24d Mar 20 16:01:10 crc kubenswrapper[4730]: I0320 16:01:10.586174 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a893eba7-9715-4599-93c2-0365a45134e9","Type":"ContainerStarted","Data":"b3e08788439002bcf967b00fb1086c9348f760c4292faa4ae4e4280cc7aca24d"} Mar 20 16:01:11 crc kubenswrapper[4730]: I0320 16:01:11.002447 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 16:01:12 crc kubenswrapper[4730]: I0320 16:01:12.533503 4730 scope.go:117] "RemoveContainer" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" Mar 20 16:01:12 crc kubenswrapper[4730]: I0320 16:01:12.880599 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:01:12 crc kubenswrapper[4730]: I0320 16:01:12.880674 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:01:13 crc kubenswrapper[4730]: I0320 16:01:13.629391 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964"} Mar 20 16:01:13 crc kubenswrapper[4730]: I0320 16:01:13.982585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 20 16:01:13 crc kubenswrapper[4730]: I0320 16:01:13.996184 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:13 crc kubenswrapper[4730]: I0320 16:01:13.996528 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-log" containerID="cri-o://1eef3a4c57a302c49282046a25ce9b6b686742d1068d8539a0c4f898222c31dc" gracePeriod=30 Mar 20 16:01:13 crc kubenswrapper[4730]: I0320 16:01:13.996556 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-httpd" containerID="cri-o://8f1291a8157b5fac6e0adc6431d157cb584ae26777485369c5a717b5d22da62c" gracePeriod=30 Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.030025 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.642312 4730 generic.go:334] "Generic (PLEG): container finished" podID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerID="1eef3a4c57a302c49282046a25ce9b6b686742d1068d8539a0c4f898222c31dc" exitCode=143 Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 
16:01:14.642360 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerDied","Data":"1eef3a4c57a302c49282046a25ce9b6b686742d1068d8539a0c4f898222c31dc"} Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.654492 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.896683 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.896974 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-central-agent" containerID="cri-o://c73d9cedf11e6a3a273cb136651dcbf0aa00b21555ddca3a8c2b2551a6375a21" gracePeriod=30 Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.897351 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" containerID="cri-o://73bfac78764299ab2b8b9302430bca90ecedaa10df4b93eb8c976fe0582a6af1" gracePeriod=30 Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.897459 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="sg-core" containerID="cri-o://786b95c139b35cbde05b7814738f76287b10b59ef0dc76fe0f5bcee037ab03c4" gracePeriod=30 Mar 20 16:01:14 crc kubenswrapper[4730]: I0320 16:01:14.897516 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-notification-agent" containerID="cri-o://8853aa1f17e6388ce020212c8d73958c09bbf6fcc38c4d043313ee458cbde4ad" gracePeriod=30 Mar 20 16:01:14 crc 
kubenswrapper[4730]: I0320 16:01:14.907231 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.183:3000/\": EOF" Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.654922 4730 generic.go:334] "Generic (PLEG): container finished" podID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerID="73bfac78764299ab2b8b9302430bca90ecedaa10df4b93eb8c976fe0582a6af1" exitCode=0 Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.655292 4730 generic.go:334] "Generic (PLEG): container finished" podID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerID="786b95c139b35cbde05b7814738f76287b10b59ef0dc76fe0f5bcee037ab03c4" exitCode=2 Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.654998 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerDied","Data":"73bfac78764299ab2b8b9302430bca90ecedaa10df4b93eb8c976fe0582a6af1"} Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.655337 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerDied","Data":"786b95c139b35cbde05b7814738f76287b10b59ef0dc76fe0f5bcee037ab03c4"} Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.655352 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerDied","Data":"c73d9cedf11e6a3a273cb136651dcbf0aa00b21555ddca3a8c2b2551a6375a21"} Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.655304 4730 generic.go:334] "Generic (PLEG): container finished" podID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerID="c73d9cedf11e6a3a273cb136651dcbf0aa00b21555ddca3a8c2b2551a6375a21" exitCode=0 Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.657885 4730 
generic.go:334] "Generic (PLEG): container finished" podID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerID="8f1291a8157b5fac6e0adc6431d157cb584ae26777485369c5a717b5d22da62c" exitCode=0 Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.657910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerDied","Data":"8f1291a8157b5fac6e0adc6431d157cb584ae26777485369c5a717b5d22da62c"} Mar 20 16:01:15 crc kubenswrapper[4730]: I0320 16:01:15.934540 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.246520 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tv4tn"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.247841 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.259027 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tv4tn"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.267699 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68m2\" (UniqueName: \"kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2\") pod \"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.267772 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts\") pod \"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 
20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.332039 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rlp9c"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.333211 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.355429 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rlp9c"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.369466 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts\") pod \"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.369790 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq8d\" (UniqueName: \"kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.369943 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.369985 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b68m2\" (UniqueName: \"kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2\") pod 
\"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.370880 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts\") pod \"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.400973 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b68m2\" (UniqueName: \"kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2\") pod \"nova-api-db-create-tv4tn\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.434557 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qt4mz"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.435866 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.460541 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qt4mz"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.473410 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.473492 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq8d\" (UniqueName: \"kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.473555 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.473604 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkb5p\" (UniqueName: \"kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.474467 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.475998 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d0d2-account-create-update-z6v46"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.477225 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.479012 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.493439 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq8d\" (UniqueName: \"kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d\") pod \"nova-cell0-db-create-rlp9c\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.507017 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d0d2-account-create-update-z6v46"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.575468 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.575577 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkb5p\" (UniqueName: 
\"kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.575700 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.575792 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkr7g\" (UniqueName: \"kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.577029 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.589053 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.605773 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkb5p\" (UniqueName: \"kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p\") pod \"nova-cell1-db-create-qt4mz\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.644637 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2850-account-create-update-4lrrq"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.645897 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.651526 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.656068 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2850-account-create-update-4lrrq"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.657682 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.682922 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.683071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkr7g\" (UniqueName: \"kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.683163 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.683263 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qhp\" (UniqueName: \"kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.684121 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.688680 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f6c808e-d523-48bd-8ec2-28b625834317" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" exitCode=1 Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.688721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964"} Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.688753 4730 scope.go:117] "RemoveContainer" containerID="22c1fa447a9712d22a3477c2b5b4f81ffbfd58601afde3e8b272d15c3b1ac1ce" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.689797 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:01:16 crc kubenswrapper[4730]: E0320 16:01:16.690015 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.706432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkr7g\" (UniqueName: \"kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g\") pod \"nova-api-d0d2-account-create-update-z6v46\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " 
pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.782964 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.784350 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qhp\" (UniqueName: \"kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.784441 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.786055 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.814880 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qhp\" (UniqueName: \"kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp\") pod \"nova-cell0-2850-account-create-update-4lrrq\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.832393 
4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4a43-account-create-update-cj4kg"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.833595 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.836237 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.836855 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.842193 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4a43-account-create-update-cj4kg"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.886087 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzxp\" (UniqueName: \"kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.886125 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.949234 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7c5c8ffdd9-xpfhf"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.951053 4730 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.953992 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.954549 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.955122 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.967974 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c5c8ffdd9-xpfhf"] Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.989332 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzxp\" (UniqueName: \"kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.989387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.990119 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " 
pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:16 crc kubenswrapper[4730]: I0320 16:01:16.992363 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.007062 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzxp\" (UniqueName: \"kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp\") pod \"nova-cell1-4a43-account-create-update-cj4kg\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.091355 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-internal-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.091422 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-config-data\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.091441 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfs6\" (UniqueName: \"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-kube-api-access-8sfs6\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.091460 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-public-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.092420 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-combined-ca-bundle\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.092665 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-log-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.092732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-etc-swift\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.092771 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-run-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 
16:01:17.164953 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.165239 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-log" containerID="cri-o://d342f4f373c3a46fb291e5160cd15525a5be1018e68f010945cfba9de11fd3fe" gracePeriod=30 Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.165347 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-httpd" containerID="cri-o://68b0b2752749a64e7ce292cfa6aabcc6400dcc4552e165b090d093ce63fe5a35" gracePeriod=30 Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.193496 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-combined-ca-bundle\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194081 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-log-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194104 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-etc-swift\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-run-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194179 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-internal-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-config-data\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194233 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sfs6\" (UniqueName: \"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-kube-api-access-8sfs6\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194453 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-public-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194631 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-log-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.194737 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9780622-27f3-4339-8107-321feed5e25b-run-httpd\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.197865 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-combined-ca-bundle\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.197957 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-etc-swift\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.199430 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-public-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" 
(UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.200231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-config-data\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.200733 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9780622-27f3-4339-8107-321feed5e25b-internal-tls-certs\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.232136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sfs6\" (UniqueName: \"kubernetes.io/projected/b9780622-27f3-4339-8107-321feed5e25b-kube-api-access-8sfs6\") pod \"swift-proxy-7c5c8ffdd9-xpfhf\" (UID: \"b9780622-27f3-4339-8107-321feed5e25b\") " pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.266180 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.711008 4730 generic.go:334] "Generic (PLEG): container finished" podID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerID="d342f4f373c3a46fb291e5160cd15525a5be1018e68f010945cfba9de11fd3fe" exitCode=143 Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.711094 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerDied","Data":"d342f4f373c3a46fb291e5160cd15525a5be1018e68f010945cfba9de11fd3fe"} Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.722522 4730 generic.go:334] "Generic (PLEG): container finished" podID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerID="8853aa1f17e6388ce020212c8d73958c09bbf6fcc38c4d043313ee458cbde4ad" exitCode=0 Mar 20 16:01:17 crc kubenswrapper[4730]: I0320 16:01:17.722567 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerDied","Data":"8853aa1f17e6388ce020212c8d73958c09bbf6fcc38c4d043313ee458cbde4ad"} Mar 20 16:01:18 crc kubenswrapper[4730]: I0320 16:01:18.744991 4730 generic.go:334] "Generic (PLEG): container finished" podID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerID="68b0b2752749a64e7ce292cfa6aabcc6400dcc4552e165b090d093ce63fe5a35" exitCode=0 Mar 20 16:01:18 crc kubenswrapper[4730]: I0320 16:01:18.745165 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerDied","Data":"68b0b2752749a64e7ce292cfa6aabcc6400dcc4552e165b090d093ce63fe5a35"} Mar 20 16:01:19 crc kubenswrapper[4730]: I0320 16:01:19.974325 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:01:19 crc kubenswrapper[4730]: I0320 
16:01:19.974382 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:01:19 crc kubenswrapper[4730]: I0320 16:01:19.975151 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:01:19 crc kubenswrapper[4730]: E0320 16:01:19.975518 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:21 crc kubenswrapper[4730]: I0320 16:01:21.962059 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.005629 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.005912 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.005963 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 
16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.005987 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.006013 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.006035 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-566vl\" (UniqueName: \"kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.006081 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.006113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data\") pod \"14cdd4b7-7a81-469f-ae2f-104b054cc583\" (UID: \"14cdd4b7-7a81-469f-ae2f-104b054cc583\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.008485 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs" (OuterVolumeSpecName: "logs") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" 
(UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.009356 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.021111 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts" (OuterVolumeSpecName: "scripts") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.029526 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl" (OuterVolumeSpecName: "kube-api-access-566vl") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "kube-api-access-566vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.030846 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.046232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.066673 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.103369 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data" (OuterVolumeSpecName: "config-data") pod "14cdd4b7-7a81-469f-ae2f-104b054cc583" (UID: "14cdd4b7-7a81-469f-ae2f-104b054cc583"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108883 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108922 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108933 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14cdd4b7-7a81-469f-ae2f-104b054cc583-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108943 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108952 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108964 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-566vl\" (UniqueName: \"kubernetes.io/projected/14cdd4b7-7a81-469f-ae2f-104b054cc583-kube-api-access-566vl\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.108997 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.109007 4730 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cdd4b7-7a81-469f-ae2f-104b054cc583-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.135962 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.211402 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.448640 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.465023 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.482508 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4a43-account-create-update-cj4kg"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.511658 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dc7dd859f-wtxnj" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618365 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618430 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle\") pod 
\"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618481 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618512 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618539 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdtvc\" (UniqueName: \"kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618559 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618578 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618599 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618630 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618651 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618686 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618707 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd\") pod \"fe109bf0-70d2-41d2-855c-6eb862e568b6\" (UID: \"fe109bf0-70d2-41d2-855c-6eb862e568b6\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618803 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618822 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pw75x\" (UniqueName: \"kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.618854 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data\") pod \"82401c9f-f5f9-4bc6-a085-c89d3632493e\" (UID: \"82401c9f-f5f9-4bc6-a085-c89d3632493e\") " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.626575 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.632307 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.632772 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6858c8d8f6-k4smz" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-api" containerID="cri-o://2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15" gracePeriod=30 Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.633880 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6858c8d8f6-k4smz" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-httpd" containerID="cri-o://3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa" gracePeriod=30 Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.635042 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.635601 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs" (OuterVolumeSpecName: "logs") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.641793 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.663880 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc" (OuterVolumeSpecName: "kube-api-access-vdtvc") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "kube-api-access-vdtvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.664197 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts" (OuterVolumeSpecName: "scripts") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.664456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x" (OuterVolumeSpecName: "kube-api-access-pw75x") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "kube-api-access-pw75x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.664883 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts" (OuterVolumeSpecName: "scripts") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.683428 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.721809 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.721842 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdtvc\" (UniqueName: \"kubernetes.io/projected/fe109bf0-70d2-41d2-855c-6eb862e568b6-kube-api-access-vdtvc\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.721867 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.721876 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82401c9f-f5f9-4bc6-a085-c89d3632493e-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.721886 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.722055 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.723490 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe109bf0-70d2-41d2-855c-6eb862e568b6-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.723521 4730 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.723534 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw75x\" (UniqueName: \"kubernetes.io/projected/82401c9f-f5f9-4bc6-a085-c89d3632493e-kube-api-access-pw75x\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.726647 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.752111 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.809899 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tv4tn"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.823659 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qt4mz"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.826223 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.826293 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.839452 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rlp9c"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.850802 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2850-account-create-update-4lrrq"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.887001 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d0d2-account-create-update-z6v46"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.894222 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a893eba7-9715-4599-93c2-0365a45134e9","Type":"ContainerStarted","Data":"a9986e5fc496f5b7dd403b81f494104bc1b23a77020a7cc79e3b92ba315ed5a9"} Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.899778 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 20 16:01:22 crc 
kubenswrapper[4730]: I0320 16:01:22.912045 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"14cdd4b7-7a81-469f-ae2f-104b054cc583","Type":"ContainerDied","Data":"656641958e552f93316749e193bcc1be09eafd97508856daa5fede226eb204fa"} Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.912122 4730 scope.go:117] "RemoveContainer" containerID="8f1291a8157b5fac6e0adc6431d157cb584ae26777485369c5a717b5d22da62c" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.912338 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.915455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" event={"ID":"7de61c5d-53ba-4d26-9a79-b82c2bc3b779","Type":"ContainerStarted","Data":"37c4808f93e82a0a4d50cb5905a0dc855da5adcf2e92087cb8a7b2e04fe7f5b5"} Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.928265 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.931357 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c5c8ffdd9-xpfhf"] Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.940241 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.650580293 podStartE2EDuration="13.940218388s" podCreationTimestamp="2026-03-20 16:01:09 +0000 UTC" firstStartedPulling="2026-03-20 16:01:10.215893499 +0000 UTC m=+1329.429264868" lastFinishedPulling="2026-03-20 16:01:21.505531594 +0000 UTC m=+1340.718902963" observedRunningTime="2026-03-20 16:01:22.911296991 +0000 UTC m=+1342.124668360" watchObservedRunningTime="2026-03-20 16:01:22.940218388 +0000 UTC 
m=+1342.153589757" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.942374 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"82401c9f-f5f9-4bc6-a085-c89d3632493e","Type":"ContainerDied","Data":"81ccba513b601e62e75f217051576c1c723201231f3534205e6403a320c44aa9"} Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.942460 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.955465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe109bf0-70d2-41d2-855c-6eb862e568b6","Type":"ContainerDied","Data":"4d621236dd54101584c726bca47e76d324d3fb96b4a4404920b5e18a6a4fbb39"} Mar 20 16:01:22 crc kubenswrapper[4730]: I0320 16:01:22.955780 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.032052 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.051124 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.091146 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data" (OuterVolumeSpecName: "config-data") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:23 crc kubenswrapper[4730]: W0320 16:01:23.099526 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod383cf79a_0636_4175_bcf8_7e369f101901.slice/crio-e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139 WatchSource:0}: Error finding container e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139: Status 404 returned error can't find the container with id e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139 Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.099556 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "82401c9f-f5f9-4bc6-a085-c89d3632493e" (UID: "82401c9f-f5f9-4bc6-a085-c89d3632493e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116058 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116698 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116771 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116783 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116827 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-httpd" Mar 20 16:01:23 crc 
kubenswrapper[4730]: E0320 16:01:23.116856 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-central-agent" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116864 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-central-agent" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116918 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116926 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116953 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116960 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116968 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="sg-core" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116974 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="sg-core" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 16:01:23.116988 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.116994 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: E0320 
16:01:23.117004 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-notification-agent" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117010 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-notification-agent" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117175 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117187 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117196 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-central-agent" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117209 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" containerName="glance-log" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117223 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="sg-core" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117237 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" containerName="glance-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117258 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.117269 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="ceilometer-notification-agent" 
Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.118396 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.120993 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.121195 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.125391 4730 scope.go:117] "RemoveContainer" containerID="1eef3a4c57a302c49282046a25ce9b6b686742d1068d8539a0c4f898222c31dc" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.133403 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.133425 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82401c9f-f5f9-4bc6-a085-c89d3632493e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.156013 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.156909 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.236198 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237061 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237091 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237292 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237370 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82ft\" (UniqueName: \"kubernetes.io/projected/84366eea-e5f9-43da-ac65-8e79cb659c0a-kube-api-access-j82ft\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237450 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237497 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.237645 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.318337 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data" (OuterVolumeSpecName: "config-data") pod "fe109bf0-70d2-41d2-855c-6eb862e568b6" (UID: "fe109bf0-70d2-41d2-855c-6eb862e568b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341207 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341293 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341320 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341364 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341479 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc 
kubenswrapper[4730]: I0320 16:01:23.341544 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82ft\" (UniqueName: \"kubernetes.io/projected/84366eea-e5f9-43da-ac65-8e79cb659c0a-kube-api-access-j82ft\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341585 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341626 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.341704 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe109bf0-70d2-41d2-855c-6eb862e568b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.345034 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.345463 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.348052 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.348311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84366eea-e5f9-43da-ac65-8e79cb659c0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.352000 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.352236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.363915 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84366eea-e5f9-43da-ac65-8e79cb659c0a-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.369726 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82ft\" (UniqueName: \"kubernetes.io/projected/84366eea-e5f9-43da-ac65-8e79cb659c0a-kube-api-access-j82ft\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.397302 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"84366eea-e5f9-43da-ac65-8e79cb659c0a\") " pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.515137 4730 scope.go:117] "RemoveContainer" containerID="68b0b2752749a64e7ce292cfa6aabcc6400dcc4552e165b090d093ce63fe5a35" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.516858 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.556406 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14cdd4b7-7a81-469f-ae2f-104b054cc583" path="/var/lib/kubelet/pods/14cdd4b7-7a81-469f-ae2f-104b054cc583/volumes" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.567740 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.591184 4730 scope.go:117] "RemoveContainer" containerID="d342f4f373c3a46fb291e5160cd15525a5be1018e68f010945cfba9de11fd3fe" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.597703 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.622801 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.675176 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.692352 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.698679 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.698907 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773581 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq6z\" (UniqueName: \"kubernetes.io/projected/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-kube-api-access-4pq6z\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773659 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773680 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773725 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773784 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773812 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773855 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.773881 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.775429 4730 scope.go:117] "RemoveContainer" containerID="73bfac78764299ab2b8b9302430bca90ecedaa10df4b93eb8c976fe0582a6af1" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878450 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878556 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878584 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pq6z\" (UniqueName: \"kubernetes.io/projected/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-kube-api-access-4pq6z\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878645 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878669 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.878727 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-scripts\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.879824 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.881032 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-logs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.881278 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.905947 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-config-data\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.907882 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pq6z\" (UniqueName: \"kubernetes.io/projected/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-kube-api-access-4pq6z\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.908329 4730 scope.go:117] "RemoveContainer" containerID="786b95c139b35cbde05b7814738f76287b10b59ef0dc76fe0f5bcee037ab03c4" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.915978 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.917160 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-scripts\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.918367 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/47ed5bd7-7aa8-4f16-98de-f09e21218ae6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.959383 4730 scope.go:117] "RemoveContainer" containerID="8853aa1f17e6388ce020212c8d73958c09bbf6fcc38c4d043313ee458cbde4ad" Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.984838 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" event={"ID":"b9780622-27f3-4339-8107-321feed5e25b","Type":"ContainerStarted","Data":"6737220c431a730c25e8b5fa82bece6085b864fef5ef8bc86899d62afa13f2b7"} Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.995426 4730 generic.go:334] "Generic (PLEG): container finished" podID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerID="3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa" exitCode=0 Mar 20 16:01:23 crc kubenswrapper[4730]: I0320 16:01:23.995727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerDied","Data":"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.001297 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" event={"ID":"7de61c5d-53ba-4d26-9a79-b82c2bc3b779","Type":"ContainerStarted","Data":"b8f3077acd6da12cfe1f43474ad395781d9175a1c666f3763b8d16340af465ed"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.006756 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d0d2-account-create-update-z6v46" event={"ID":"383cf79a-0636-4175-bcf8-7e369f101901","Type":"ContainerStarted","Data":"e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 
16:01:24.015771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tv4tn" event={"ID":"dac41622-7c80-4fce-a5ac-8a04d301669d","Type":"ContainerStarted","Data":"0bc2c745fdc03e2ebaee7279abc3bfbcbbdcf758f441e201c8581df04a4b242e"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.033731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qt4mz" event={"ID":"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1","Type":"ContainerStarted","Data":"355a75fe7ba277cb711131a013e942d383afbd409506d81ea0e5c0f0045fafaa"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.040459 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rlp9c" event={"ID":"475a52ba-bc8d-4c7b-ae99-330d6ec2b358","Type":"ContainerStarted","Data":"ce528fba4f5580657123f969c759921b847c901bd39a64e8b6b06d44553bfc4a"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.048266 4730 scope.go:117] "RemoveContainer" containerID="c73d9cedf11e6a3a273cb136651dcbf0aa00b21555ddca3a8c2b2551a6375a21" Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.054658 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" event={"ID":"3f625a9e-a940-476b-85b2-ff54c5e87785","Type":"ContainerStarted","Data":"3216fe865866f7179d8f43203a7e55e15d5d0647158c4d882b6846c9c28b49d1"} Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.344492 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"47ed5bd7-7aa8-4f16-98de-f09e21218ae6\") " pod="openstack/glance-default-external-api-0" Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.429011 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 16:01:24 crc kubenswrapper[4730]: W0320 16:01:24.450501 4730 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84366eea_e5f9_43da_ac65_8e79cb659c0a.slice/crio-f66b063c9a11d8bc88c8ceb8816f91a77a1c4b1fc4469fc7bcec47c61420ab84 WatchSource:0}: Error finding container f66b063c9a11d8bc88c8ceb8816f91a77a1c4b1fc4469fc7bcec47c61420ab84: Status 404 returned error can't find the container with id f66b063c9a11d8bc88c8ceb8816f91a77a1c4b1fc4469fc7bcec47c61420ab84 Mar 20 16:01:24 crc kubenswrapper[4730]: I0320 16:01:24.509353 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.069355 4730 generic.go:334] "Generic (PLEG): container finished" podID="dac41622-7c80-4fce-a5ac-8a04d301669d" containerID="fa7fa1c1a12965d7d645639e89c04d923a3d343c7729667128d205eaaba9942e" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.069646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tv4tn" event={"ID":"dac41622-7c80-4fce-a5ac-8a04d301669d","Type":"ContainerDied","Data":"fa7fa1c1a12965d7d645639e89c04d923a3d343c7729667128d205eaaba9942e"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.072546 4730 generic.go:334] "Generic (PLEG): container finished" podID="475a52ba-bc8d-4c7b-ae99-330d6ec2b358" containerID="be1153307c9e28a344ac73169445af77d8ee3c7d9c2256c03916bd83fc0e8437" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.072591 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rlp9c" event={"ID":"475a52ba-bc8d-4c7b-ae99-330d6ec2b358","Type":"ContainerDied","Data":"be1153307c9e28a344ac73169445af77d8ee3c7d9c2256c03916bd83fc0e8437"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.074922 4730 generic.go:334] "Generic (PLEG): container finished" podID="383cf79a-0636-4175-bcf8-7e369f101901" 
containerID="f2c396b5999dcbacb34f0cb38c776c483344cb3d6a6925954ac69d2fbac35de7" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.075032 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d0d2-account-create-update-z6v46" event={"ID":"383cf79a-0636-4175-bcf8-7e369f101901","Type":"ContainerDied","Data":"f2c396b5999dcbacb34f0cb38c776c483344cb3d6a6925954ac69d2fbac35de7"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.077068 4730 generic.go:334] "Generic (PLEG): container finished" podID="6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" containerID="0d50b068c846deeebd08139bc4d49513ed414820309b36a4108d6ffa43871b84" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.077143 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qt4mz" event={"ID":"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1","Type":"ContainerDied","Data":"0d50b068c846deeebd08139bc4d49513ed414820309b36a4108d6ffa43871b84"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.080227 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f625a9e-a940-476b-85b2-ff54c5e87785" containerID="3a26f3f5793abd65e69907fa90ac71e2abaa4ea13397a929de033f2bbf59a251" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.080286 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" event={"ID":"3f625a9e-a940-476b-85b2-ff54c5e87785","Type":"ContainerDied","Data":"3a26f3f5793abd65e69907fa90ac71e2abaa4ea13397a929de033f2bbf59a251"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.098340 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" event={"ID":"b9780622-27f3-4339-8107-321feed5e25b","Type":"ContainerStarted","Data":"4fcf792521814c19293591e5848d25f6975cd1c658448c02e673fd07255574a6"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.098381 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" event={"ID":"b9780622-27f3-4339-8107-321feed5e25b","Type":"ContainerStarted","Data":"d59a1350a5c641d443411375443c080b305a5acd7fdd08f8f8e72ad5b37fd568"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.098791 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.098944 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.100316 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"84366eea-e5f9-43da-ac65-8e79cb659c0a","Type":"ContainerStarted","Data":"f66b063c9a11d8bc88c8ceb8816f91a77a1c4b1fc4469fc7bcec47c61420ab84"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.106725 4730 generic.go:334] "Generic (PLEG): container finished" podID="7de61c5d-53ba-4d26-9a79-b82c2bc3b779" containerID="b8f3077acd6da12cfe1f43474ad395781d9175a1c666f3763b8d16340af465ed" exitCode=0 Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.106779 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" event={"ID":"7de61c5d-53ba-4d26-9a79-b82c2bc3b779","Type":"ContainerDied","Data":"b8f3077acd6da12cfe1f43474ad395781d9175a1c666f3763b8d16340af465ed"} Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.238583 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" podStartSLOduration=9.238560098 podStartE2EDuration="9.238560098s" podCreationTimestamp="2026-03-20 16:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:25.219757047 +0000 UTC m=+1344.433128416" watchObservedRunningTime="2026-03-20 16:01:25.238560098 +0000 
UTC m=+1344.451931467" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.283692 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.561474 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82401c9f-f5f9-4bc6-a085-c89d3632493e" path="/var/lib/kubelet/pods/82401c9f-f5f9-4bc6-a085-c89d3632493e/volumes" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.677833 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.825147 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdzxp\" (UniqueName: \"kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp\") pod \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.825318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts\") pod \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\" (UID: \"7de61c5d-53ba-4d26-9a79-b82c2bc3b779\") " Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.826513 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7de61c5d-53ba-4d26-9a79-b82c2bc3b779" (UID: "7de61c5d-53ba-4d26-9a79-b82c2bc3b779"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.833061 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp" (OuterVolumeSpecName: "kube-api-access-rdzxp") pod "7de61c5d-53ba-4d26-9a79-b82c2bc3b779" (UID: "7de61c5d-53ba-4d26-9a79-b82c2bc3b779"). InnerVolumeSpecName "kube-api-access-rdzxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.929484 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdzxp\" (UniqueName: \"kubernetes.io/projected/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-kube-api-access-rdzxp\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:25 crc kubenswrapper[4730]: I0320 16:01:25.929513 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de61c5d-53ba-4d26-9a79-b82c2bc3b779-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.118289 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"84366eea-e5f9-43da-ac65-8e79cb659c0a","Type":"ContainerStarted","Data":"779e7477c07bb8fb52d96b4fbf4853ffaaf30fb37f63820812b07120ae5f29d8"} Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.119853 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" event={"ID":"7de61c5d-53ba-4d26-9a79-b82c2bc3b779","Type":"ContainerDied","Data":"37c4808f93e82a0a4d50cb5905a0dc855da5adcf2e92087cb8a7b2e04fe7f5b5"} Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.120180 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c4808f93e82a0a4d50cb5905a0dc855da5adcf2e92087cb8a7b2e04fe7f5b5" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.120233 
4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4a43-account-create-update-cj4kg" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.121943 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47ed5bd7-7aa8-4f16-98de-f09e21218ae6","Type":"ContainerStarted","Data":"fd58701a403de7bf0ddedef18266cb425b3027c016fae3e50be9328963693994"} Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.671796 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.865865 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts\") pod \"383cf79a-0636-4175-bcf8-7e369f101901\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.866010 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkr7g\" (UniqueName: \"kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g\") pod \"383cf79a-0636-4175-bcf8-7e369f101901\" (UID: \"383cf79a-0636-4175-bcf8-7e369f101901\") " Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.866567 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "383cf79a-0636-4175-bcf8-7e369f101901" (UID: "383cf79a-0636-4175-bcf8-7e369f101901"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.866869 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/383cf79a-0636-4175-bcf8-7e369f101901-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.891090 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g" (OuterVolumeSpecName: "kube-api-access-fkr7g") pod "383cf79a-0636-4175-bcf8-7e369f101901" (UID: "383cf79a-0636-4175-bcf8-7e369f101901"). InnerVolumeSpecName "kube-api-access-fkr7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:26 crc kubenswrapper[4730]: I0320 16:01:26.968484 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkr7g\" (UniqueName: \"kubernetes.io/projected/383cf79a-0636-4175-bcf8-7e369f101901-kube-api-access-fkr7g\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.040743 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.082063 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b68m2\" (UniqueName: \"kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2\") pod \"dac41622-7c80-4fce-a5ac-8a04d301669d\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.082643 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts\") pod \"dac41622-7c80-4fce-a5ac-8a04d301669d\" (UID: \"dac41622-7c80-4fce-a5ac-8a04d301669d\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.083582 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac41622-7c80-4fce-a5ac-8a04d301669d" (UID: "dac41622-7c80-4fce-a5ac-8a04d301669d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.098428 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2" (OuterVolumeSpecName: "kube-api-access-b68m2") pod "dac41622-7c80-4fce-a5ac-8a04d301669d" (UID: "dac41622-7c80-4fce-a5ac-8a04d301669d"). InnerVolumeSpecName "kube-api-access-b68m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.183869 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b68m2\" (UniqueName: \"kubernetes.io/projected/dac41622-7c80-4fce-a5ac-8a04d301669d-kube-api-access-b68m2\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.183905 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac41622-7c80-4fce-a5ac-8a04d301669d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.193564 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.205927 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tv4tn" event={"ID":"dac41622-7c80-4fce-a5ac-8a04d301669d","Type":"ContainerDied","Data":"0bc2c745fdc03e2ebaee7279abc3bfbcbbdcf758f441e201c8581df04a4b242e"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.205965 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc2c745fdc03e2ebaee7279abc3bfbcbbdcf758f441e201c8581df04a4b242e" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.206025 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tv4tn" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.212574 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rlp9c" event={"ID":"475a52ba-bc8d-4c7b-ae99-330d6ec2b358","Type":"ContainerDied","Data":"ce528fba4f5580657123f969c759921b847c901bd39a64e8b6b06d44553bfc4a"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.212631 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce528fba4f5580657123f969c759921b847c901bd39a64e8b6b06d44553bfc4a" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.212708 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rlp9c" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.220234 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"84366eea-e5f9-43da-ac65-8e79cb659c0a","Type":"ContainerStarted","Data":"996f0b7f4e3a654b9192e645695b74025c495451221958d0da443f5f7189c82a"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.262522 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47ed5bd7-7aa8-4f16-98de-f09e21218ae6","Type":"ContainerStarted","Data":"ce34938fef5613dae9cf7087ec34e9ba6875fd8831873c932828c58d94c6b2e6"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.265773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d0d2-account-create-update-z6v46" event={"ID":"383cf79a-0636-4175-bcf8-7e369f101901","Type":"ContainerDied","Data":"e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.265807 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6bf2254b89d522962de84d44ec16f569df85b053c9d15be0ca828bf3c222139" Mar 20 16:01:27 crc kubenswrapper[4730]: 
I0320 16:01:27.265864 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d0d2-account-create-update-z6v46" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.278691 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.278673941 podStartE2EDuration="5.278673941s" podCreationTimestamp="2026-03-20 16:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:27.271942771 +0000 UTC m=+1346.485314130" watchObservedRunningTime="2026-03-20 16:01:27.278673941 +0000 UTC m=+1346.492045310" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.282006 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" event={"ID":"3f625a9e-a940-476b-85b2-ff54c5e87785","Type":"ContainerDied","Data":"3216fe865866f7179d8f43203a7e55e15d5d0647158c4d882b6846c9c28b49d1"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.282046 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3216fe865866f7179d8f43203a7e55e15d5d0647158c4d882b6846c9c28b49d1" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.296840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qt4mz" event={"ID":"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1","Type":"ContainerDied","Data":"355a75fe7ba277cb711131a013e942d383afbd409506d81ea0e5c0f0045fafaa"} Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.296883 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="355a75fe7ba277cb711131a013e942d383afbd409506d81ea0e5c0f0045fafaa" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.299799 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.314221 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.389922 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts\") pod \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.390021 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq8d\" (UniqueName: \"kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d\") pod \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\" (UID: \"475a52ba-bc8d-4c7b-ae99-330d6ec2b358\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.390804 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "475a52ba-bc8d-4c7b-ae99-330d6ec2b358" (UID: "475a52ba-bc8d-4c7b-ae99-330d6ec2b358"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.406843 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d" (OuterVolumeSpecName: "kube-api-access-7rq8d") pod "475a52ba-bc8d-4c7b-ae99-330d6ec2b358" (UID: "475a52ba-bc8d-4c7b-ae99-330d6ec2b358"). InnerVolumeSpecName "kube-api-access-7rq8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.491362 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts\") pod \"3f625a9e-a940-476b-85b2-ff54c5e87785\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.491466 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts\") pod \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.491614 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25qhp\" (UniqueName: \"kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp\") pod \"3f625a9e-a940-476b-85b2-ff54c5e87785\" (UID: \"3f625a9e-a940-476b-85b2-ff54c5e87785\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.491641 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkb5p\" (UniqueName: \"kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p\") pod \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\" (UID: \"6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1\") " Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.492043 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f625a9e-a940-476b-85b2-ff54c5e87785" (UID: "3f625a9e-a940-476b-85b2-ff54c5e87785"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.492043 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" (UID: "6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.492137 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq8d\" (UniqueName: \"kubernetes.io/projected/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-kube-api-access-7rq8d\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.492155 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/475a52ba-bc8d-4c7b-ae99-330d6ec2b358-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.502538 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p" (OuterVolumeSpecName: "kube-api-access-fkb5p") pod "6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" (UID: "6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1"). InnerVolumeSpecName "kube-api-access-fkb5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.502635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp" (OuterVolumeSpecName: "kube-api-access-25qhp") pod "3f625a9e-a940-476b-85b2-ff54c5e87785" (UID: "3f625a9e-a940-476b-85b2-ff54c5e87785"). InnerVolumeSpecName "kube-api-access-25qhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.593398 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f625a9e-a940-476b-85b2-ff54c5e87785-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.593433 4730 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.593444 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25qhp\" (UniqueName: \"kubernetes.io/projected/3f625a9e-a940-476b-85b2-ff54c5e87785-kube-api-access-25qhp\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.593454 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkb5p\" (UniqueName: \"kubernetes.io/projected/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1-kube-api-access-fkb5p\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:27 crc kubenswrapper[4730]: I0320 16:01:27.988432 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.102239 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config\") pod \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.102323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs\") pod \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.102388 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97ld6\" (UniqueName: \"kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6\") pod \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.102513 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config\") pod \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.102599 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle\") pod \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\" (UID: \"4ed9fad7-284f-40b4-9c3b-7a213aff010a\") " Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.107570 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4ed9fad7-284f-40b4-9c3b-7a213aff010a" (UID: "4ed9fad7-284f-40b4-9c3b-7a213aff010a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.109987 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6" (OuterVolumeSpecName: "kube-api-access-97ld6") pod "4ed9fad7-284f-40b4-9c3b-7a213aff010a" (UID: "4ed9fad7-284f-40b4-9c3b-7a213aff010a"). InnerVolumeSpecName "kube-api-access-97ld6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.152557 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ed9fad7-284f-40b4-9c3b-7a213aff010a" (UID: "4ed9fad7-284f-40b4-9c3b-7a213aff010a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.162524 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config" (OuterVolumeSpecName: "config") pod "4ed9fad7-284f-40b4-9c3b-7a213aff010a" (UID: "4ed9fad7-284f-40b4-9c3b-7a213aff010a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.183239 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4ed9fad7-284f-40b4-9c3b-7a213aff010a" (UID: "4ed9fad7-284f-40b4-9c3b-7a213aff010a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.206419 4730 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.206457 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.206469 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.206477 4730 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed9fad7-284f-40b4-9c3b-7a213aff010a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.206486 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97ld6\" (UniqueName: \"kubernetes.io/projected/4ed9fad7-284f-40b4-9c3b-7a213aff010a-kube-api-access-97ld6\") on node \"crc\" DevicePath \"\"" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.310530 4730 generic.go:334] "Generic (PLEG): container finished" podID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerID="2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15" exitCode=0 Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.310675 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6858c8d8f6-k4smz" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.310875 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerDied","Data":"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15"} Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.310921 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6858c8d8f6-k4smz" event={"ID":"4ed9fad7-284f-40b4-9c3b-7a213aff010a","Type":"ContainerDied","Data":"52473fa5ebf9585a5de42b7ba1a1ed907405640eabed30fdba7a035924a392d0"} Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.310941 4730 scope.go:117] "RemoveContainer" containerID="3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.313040 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qt4mz" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.314449 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47ed5bd7-7aa8-4f16-98de-f09e21218ae6","Type":"ContainerStarted","Data":"ff63c9353e2d509add01dd682087a5bf19343746c25a249896e86f0c4fee5817"} Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.315045 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2850-account-create-update-4lrrq" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.340712 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.340690172 podStartE2EDuration="5.340690172s" podCreationTimestamp="2026-03-20 16:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:01:28.335338582 +0000 UTC m=+1347.548709951" watchObservedRunningTime="2026-03-20 16:01:28.340690172 +0000 UTC m=+1347.554061541" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.350221 4730 scope.go:117] "RemoveContainer" containerID="2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.372895 4730 scope.go:117] "RemoveContainer" containerID="3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa" Mar 20 16:01:28 crc kubenswrapper[4730]: E0320 16:01:28.373347 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa\": container with ID starting with 3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa not found: ID does not exist" containerID="3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.373393 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa"} err="failed to get container status \"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa\": rpc error: code = NotFound desc = could not find container \"3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa\": container with ID starting with 
3635b09696560454a28e6d666babdb61696ccff059aecec39acea6122546c8aa not found: ID does not exist" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.373422 4730 scope.go:117] "RemoveContainer" containerID="2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15" Mar 20 16:01:28 crc kubenswrapper[4730]: E0320 16:01:28.373693 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15\": container with ID starting with 2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15 not found: ID does not exist" containerID="2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.373729 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15"} err="failed to get container status \"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15\": rpc error: code = NotFound desc = could not find container \"2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15\": container with ID starting with 2eff1617c29a34da6021d776b5bc5c6695025819add4af253986837526af0f15 not found: ID does not exist" Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.376912 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:01:28 crc kubenswrapper[4730]: I0320 16:01:28.396339 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6858c8d8f6-k4smz"] Mar 20 16:01:29 crc kubenswrapper[4730]: I0320 16:01:29.544154 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" path="/var/lib/kubelet/pods/4ed9fad7-284f-40b4-9c3b-7a213aff010a/volumes" Mar 20 16:01:29 crc kubenswrapper[4730]: I0320 16:01:29.974442 4730 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:01:29 crc kubenswrapper[4730]: I0320 16:01:29.974517 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 20 16:01:29 crc kubenswrapper[4730]: I0320 16:01:29.975324 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:01:29 crc kubenswrapper[4730]: E0320 16:01:29.975711 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.985575 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsrbd"] Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.987485 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de61c5d-53ba-4d26-9a79-b82c2bc3b779" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.987599 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de61c5d-53ba-4d26-9a79-b82c2bc3b779" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.987676 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383cf79a-0636-4175-bcf8-7e369f101901" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.987742 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="383cf79a-0636-4175-bcf8-7e369f101901" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.987804 4730 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.987863 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.987931 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f625a9e-a940-476b-85b2-ff54c5e87785" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.988002 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f625a9e-a940-476b-85b2-ff54c5e87785" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.988092 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-api" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.988162 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-api" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.988275 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac41622-7c80-4fce-a5ac-8a04d301669d" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.988367 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac41622-7c80-4fce-a5ac-8a04d301669d" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 16:01:31.988457 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-httpd" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.988540 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-httpd" Mar 20 16:01:31 crc kubenswrapper[4730]: E0320 
16:01:31.988728 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475a52ba-bc8d-4c7b-ae99-330d6ec2b358" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.988810 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="475a52ba-bc8d-4c7b-ae99-330d6ec2b358" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989085 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989179 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de61c5d-53ba-4d26-9a79-b82c2bc3b779" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989279 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="383cf79a-0636-4175-bcf8-7e369f101901" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989358 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-httpd" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989448 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed9fad7-284f-40b4-9c3b-7a213aff010a" containerName="neutron-api" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989516 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="475a52ba-bc8d-4c7b-ae99-330d6ec2b358" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989587 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac41622-7c80-4fce-a5ac-8a04d301669d" containerName="mariadb-database-create" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.989658 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3f625a9e-a940-476b-85b2-ff54c5e87785" containerName="mariadb-account-create-update" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.990574 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.993631 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.993952 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fmsk9" Mar 20 16:01:31 crc kubenswrapper[4730]: I0320 16:01:31.994280 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.004124 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsrbd"] Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.072807 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.072959 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.073000 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8swbj\" (UniqueName: 
\"kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.073045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.174628 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.174882 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.174971 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8swbj\" (UniqueName: \"kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.175036 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.181406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.182604 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.185753 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.194620 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8swbj\" (UniqueName: \"kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj\") pod \"nova-cell0-conductor-db-sync-qsrbd\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.276776 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.277849 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.307328 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:01:32 crc kubenswrapper[4730]: I0320 16:01:32.780290 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsrbd"] Mar 20 16:01:33 crc kubenswrapper[4730]: I0320 16:01:33.377085 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" event={"ID":"279d2368-abe1-465a-9007-68542e5dbfc4","Type":"ContainerStarted","Data":"da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2"} Mar 20 16:01:33 crc kubenswrapper[4730]: I0320 16:01:33.517750 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:33 crc kubenswrapper[4730]: I0320 16:01:33.518147 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:33 crc kubenswrapper[4730]: I0320 16:01:33.557560 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:33 crc kubenswrapper[4730]: I0320 16:01:33.588656 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.390025 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.390069 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.510558 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.510623 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.545883 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 16:01:34 crc kubenswrapper[4730]: I0320 16:01:34.567567 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 16:01:35 crc kubenswrapper[4730]: I0320 16:01:35.399072 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 16:01:35 crc kubenswrapper[4730]: I0320 16:01:35.399108 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 16:01:36 crc kubenswrapper[4730]: I0320 16:01:36.544574 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:36 crc kubenswrapper[4730]: I0320 16:01:36.544675 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:01:36 crc kubenswrapper[4730]: I0320 16:01:36.904282 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 16:01:37 crc kubenswrapper[4730]: I0320 16:01:37.825386 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 16:01:37 crc kubenswrapper[4730]: I0320 16:01:37.825719 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:01:38 crc kubenswrapper[4730]: I0320 16:01:38.078909 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 16:01:42 crc kubenswrapper[4730]: I0320 16:01:42.879746 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:01:42 crc kubenswrapper[4730]: I0320 16:01:42.880378 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:01:42 crc kubenswrapper[4730]: I0320 16:01:42.880440 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:01:42 crc kubenswrapper[4730]: I0320 16:01:42.881467 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:01:42 crc kubenswrapper[4730]: I0320 16:01:42.881648 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd" gracePeriod=600 Mar 20 16:01:43 crc kubenswrapper[4730]: I0320 16:01:43.495208 4730 generic.go:334] "Generic (PLEG): 
container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd" exitCode=0 Mar 20 16:01:43 crc kubenswrapper[4730]: I0320 16:01:43.495288 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd"} Mar 20 16:01:43 crc kubenswrapper[4730]: I0320 16:01:43.495705 4730 scope.go:117] "RemoveContainer" containerID="5a28eadd1ac2eb334876364a020c16296d471cf45645c126a154825ac93c80d5" Mar 20 16:01:44 crc kubenswrapper[4730]: I0320 16:01:44.507565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" event={"ID":"279d2368-abe1-465a-9007-68542e5dbfc4","Type":"ContainerStarted","Data":"79f3caf37f32c7308d415a20940a3f1cbb774116c3657a269e41ea28bde4ad32"} Mar 20 16:01:44 crc kubenswrapper[4730]: I0320 16:01:44.510362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143"} Mar 20 16:01:44 crc kubenswrapper[4730]: I0320 16:01:44.527065 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" podStartSLOduration=2.809549272 podStartE2EDuration="13.527042622s" podCreationTimestamp="2026-03-20 16:01:31 +0000 UTC" firstStartedPulling="2026-03-20 16:01:32.787695838 +0000 UTC m=+1352.001067207" lastFinishedPulling="2026-03-20 16:01:43.505189188 +0000 UTC m=+1362.718560557" observedRunningTime="2026-03-20 16:01:44.526099635 +0000 UTC m=+1363.739471004" watchObservedRunningTime="2026-03-20 16:01:44.527042622 +0000 UTC m=+1363.740414031" Mar 20 16:01:44 crc 
kubenswrapper[4730]: I0320 16:01:44.533310 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:01:44 crc kubenswrapper[4730]: E0320 16:01:44.533886 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:52 crc kubenswrapper[4730]: I0320 16:01:52.346962 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.183:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:01:53 crc kubenswrapper[4730]: I0320 16:01:53.629003 4730 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfe109bf0-70d2-41d2-855c-6eb862e568b6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podfe109bf0-70d2-41d2-855c-6eb862e568b6] : Timed out while waiting for systemd to remove kubepods-besteffort-podfe109bf0_70d2_41d2_855c_6eb862e568b6.slice" Mar 20 16:01:53 crc kubenswrapper[4730]: E0320 16:01:53.629562 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podfe109bf0-70d2-41d2-855c-6eb862e568b6] : unable to destroy cgroup paths for cgroup [kubepods besteffort podfe109bf0-70d2-41d2-855c-6eb862e568b6] : Timed out while waiting for systemd to remove kubepods-besteffort-podfe109bf0_70d2_41d2_855c_6eb862e568b6.slice" pod="openstack/ceilometer-0" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.627738 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.690697 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.703393 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.709473 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.712188 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.720484 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.720743 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.724852 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.789883 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790025 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790063 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790165 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2lkw\" (UniqueName: \"kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790323 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790373 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.790426 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.899949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900024 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900080 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900214 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900330 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.900452 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2lkw\" (UniqueName: \"kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.901473 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.901535 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.909545 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.913134 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.914729 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " 
pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.916336 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:54 crc kubenswrapper[4730]: I0320 16:01:54.929020 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2lkw\" (UniqueName: \"kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw\") pod \"ceilometer-0\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " pod="openstack/ceilometer-0" Mar 20 16:01:55 crc kubenswrapper[4730]: I0320 16:01:55.051683 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:01:55 crc kubenswrapper[4730]: I0320 16:01:55.533306 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:01:55 crc kubenswrapper[4730]: E0320 16:01:55.533997 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3f6c808e-d523-48bd-8ec2-28b625834317)\"" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" Mar 20 16:01:55 crc kubenswrapper[4730]: I0320 16:01:55.548536 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe109bf0-70d2-41d2-855c-6eb862e568b6" path="/var/lib/kubelet/pods/fe109bf0-70d2-41d2-855c-6eb862e568b6/volumes" Mar 20 16:01:55 crc kubenswrapper[4730]: W0320 16:01:55.561339 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03f19a6_9d2e_421e_9388_68ae49ae68ef.slice/crio-d3a4ea5bd7c7ce048ff6487734956147a2553ddcde8a7a07f3bf8e3324ca2fc8 WatchSource:0}: Error finding container d3a4ea5bd7c7ce048ff6487734956147a2553ddcde8a7a07f3bf8e3324ca2fc8: Status 404 returned error can't find the container with id d3a4ea5bd7c7ce048ff6487734956147a2553ddcde8a7a07f3bf8e3324ca2fc8 Mar 20 16:01:55 crc kubenswrapper[4730]: I0320 16:01:55.561350 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:01:55 crc kubenswrapper[4730]: I0320 16:01:55.642924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerStarted","Data":"d3a4ea5bd7c7ce048ff6487734956147a2553ddcde8a7a07f3bf8e3324ca2fc8"} Mar 20 16:01:56 crc kubenswrapper[4730]: I0320 16:01:56.652452 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerStarted","Data":"f0057f5dab402a46d249f303cdcb727d64def9624f515e382c4084ea494f079a"} Mar 20 16:01:56 crc kubenswrapper[4730]: I0320 16:01:56.652851 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerStarted","Data":"9dd9d5de632bf89de261876ec488aa9f35efb63310645d30071a551b02d1b18c"} Mar 20 16:01:58 crc kubenswrapper[4730]: I0320 16:01:58.680060 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerStarted","Data":"fcfdc390791242aab10df223e6f66346cede319e34a43195f06b469deaa0cf18"} Mar 20 16:01:59 crc kubenswrapper[4730]: I0320 16:01:59.690129 4730 generic.go:334] "Generic (PLEG): container finished" podID="279d2368-abe1-465a-9007-68542e5dbfc4" 
containerID="79f3caf37f32c7308d415a20940a3f1cbb774116c3657a269e41ea28bde4ad32" exitCode=0 Mar 20 16:01:59 crc kubenswrapper[4730]: I0320 16:01:59.690205 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" event={"ID":"279d2368-abe1-465a-9007-68542e5dbfc4","Type":"ContainerDied","Data":"79f3caf37f32c7308d415a20940a3f1cbb774116c3657a269e41ea28bde4ad32"} Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.140268 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567042-rngpc"] Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.141992 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.144291 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.144511 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.144682 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.156563 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567042-rngpc"] Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.304161 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grlf\" (UniqueName: \"kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf\") pod \"auto-csr-approver-29567042-rngpc\" (UID: \"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455\") " pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.405708 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grlf\" (UniqueName: \"kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf\") pod \"auto-csr-approver-29567042-rngpc\" (UID: \"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455\") " pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.441414 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grlf\" (UniqueName: \"kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf\") pod \"auto-csr-approver-29567042-rngpc\" (UID: \"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455\") " pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.461022 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.708351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerStarted","Data":"3cdc313706dd8cba3d3a54bac2bfd0a19fc778679c99983f7df6459d6ad53f25"} Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.708683 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.740306 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.387666184 podStartE2EDuration="6.74027849s" podCreationTimestamp="2026-03-20 16:01:54 +0000 UTC" firstStartedPulling="2026-03-20 16:01:55.564375649 +0000 UTC m=+1374.777747018" lastFinishedPulling="2026-03-20 16:01:59.916987955 +0000 UTC m=+1379.130359324" observedRunningTime="2026-03-20 16:02:00.734078302 +0000 UTC m=+1379.947449671" watchObservedRunningTime="2026-03-20 16:02:00.74027849 +0000 
UTC m=+1379.953649859" Mar 20 16:02:00 crc kubenswrapper[4730]: I0320 16:02:00.981185 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567042-rngpc"] Mar 20 16:02:00 crc kubenswrapper[4730]: W0320 16:02:00.985163 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f1785c3_01b9_48cd_bfc9_c0fdb1c18455.slice/crio-0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355 WatchSource:0}: Error finding container 0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355: Status 404 returned error can't find the container with id 0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355 Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.074995 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.223784 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8swbj\" (UniqueName: \"kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj\") pod \"279d2368-abe1-465a-9007-68542e5dbfc4\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.223864 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts\") pod \"279d2368-abe1-465a-9007-68542e5dbfc4\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.223921 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle\") pod \"279d2368-abe1-465a-9007-68542e5dbfc4\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " 
Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.223962 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data\") pod \"279d2368-abe1-465a-9007-68542e5dbfc4\" (UID: \"279d2368-abe1-465a-9007-68542e5dbfc4\") " Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.243590 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj" (OuterVolumeSpecName: "kube-api-access-8swbj") pod "279d2368-abe1-465a-9007-68542e5dbfc4" (UID: "279d2368-abe1-465a-9007-68542e5dbfc4"). InnerVolumeSpecName "kube-api-access-8swbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.259563 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts" (OuterVolumeSpecName: "scripts") pod "279d2368-abe1-465a-9007-68542e5dbfc4" (UID: "279d2368-abe1-465a-9007-68542e5dbfc4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.259681 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "279d2368-abe1-465a-9007-68542e5dbfc4" (UID: "279d2368-abe1-465a-9007-68542e5dbfc4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.263639 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data" (OuterVolumeSpecName: "config-data") pod "279d2368-abe1-465a-9007-68542e5dbfc4" (UID: "279d2368-abe1-465a-9007-68542e5dbfc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.326544 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8swbj\" (UniqueName: \"kubernetes.io/projected/279d2368-abe1-465a-9007-68542e5dbfc4-kube-api-access-8swbj\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.326588 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.326602 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.326613 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279d2368-abe1-465a-9007-68542e5dbfc4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.716029 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567042-rngpc" event={"ID":"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455","Type":"ContainerStarted","Data":"0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355"} Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.718048 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-qsrbd" event={"ID":"279d2368-abe1-465a-9007-68542e5dbfc4","Type":"ContainerDied","Data":"da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2"} Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.718076 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.718094 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qsrbd" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.799361 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 16:02:01 crc kubenswrapper[4730]: E0320 16:02:01.799750 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279d2368-abe1-465a-9007-68542e5dbfc4" containerName="nova-cell0-conductor-db-sync" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.799769 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="279d2368-abe1-465a-9007-68542e5dbfc4" containerName="nova-cell0-conductor-db-sync" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.799992 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="279d2368-abe1-465a-9007-68542e5dbfc4" containerName="nova-cell0-conductor-db-sync" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.800804 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.805018 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fmsk9" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.805189 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.809201 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.939403 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95rw8\" (UniqueName: \"kubernetes.io/projected/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-kube-api-access-95rw8\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.939743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:01 crc kubenswrapper[4730]: I0320 16:02:01.939904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.041767 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95rw8\" (UniqueName: 
\"kubernetes.io/projected/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-kube-api-access-95rw8\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.041899 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.041935 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.046212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.046855 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.066223 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95rw8\" (UniqueName: \"kubernetes.io/projected/0940bcf4-b3ca-4f1d-92df-5fa9f477c800-kube-api-access-95rw8\") pod \"nova-cell0-conductor-0\" (UID: 
\"0940bcf4-b3ca-4f1d-92df-5fa9f477c800\") " pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.123907 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.585767 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.729685 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0940bcf4-b3ca-4f1d-92df-5fa9f477c800","Type":"ContainerStarted","Data":"b01ec4d0612406942ef5a166c52fecb4325a42f7709f92a49bd937099ba5ff8c"} Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.732885 4730 generic.go:334] "Generic (PLEG): container finished" podID="8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" containerID="807d658a9e1c791073ad6dce59cf86eec477c7d4420c8a363f99c8986963ad00" exitCode=0 Mar 20 16:02:02 crc kubenswrapper[4730]: I0320 16:02:02.733648 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567042-rngpc" event={"ID":"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455","Type":"ContainerDied","Data":"807d658a9e1c791073ad6dce59cf86eec477c7d4420c8a363f99c8986963ad00"} Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.656326 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.742970 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0940bcf4-b3ca-4f1d-92df-5fa9f477c800","Type":"ContainerStarted","Data":"339ea55d2abfdc249e3164e1df0588dc76b8df8a4a83a8021b6600b6a5471598"} Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.743171 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" 
containerName="ceilometer-central-agent" containerID="cri-o://9dd9d5de632bf89de261876ec488aa9f35efb63310645d30071a551b02d1b18c" gracePeriod=30 Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.743261 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="sg-core" containerID="cri-o://fcfdc390791242aab10df223e6f66346cede319e34a43195f06b469deaa0cf18" gracePeriod=30 Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.743222 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="proxy-httpd" containerID="cri-o://3cdc313706dd8cba3d3a54bac2bfd0a19fc778679c99983f7df6459d6ad53f25" gracePeriod=30 Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.743301 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-notification-agent" containerID="cri-o://f0057f5dab402a46d249f303cdcb727d64def9624f515e382c4084ea494f079a" gracePeriod=30 Mar 20 16:02:03 crc kubenswrapper[4730]: I0320 16:02:03.769002 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.76898248 podStartE2EDuration="2.76898248s" podCreationTimestamp="2026-03-20 16:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:03.764580184 +0000 UTC m=+1382.977951563" watchObservedRunningTime="2026-03-20 16:02:03.76898248 +0000 UTC m=+1382.982353849" Mar 20 16:02:04 crc kubenswrapper[4730]: I0320 16:02:04.754678 4730 generic.go:334] "Generic (PLEG): container finished" podID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerID="3cdc313706dd8cba3d3a54bac2bfd0a19fc778679c99983f7df6459d6ad53f25" 
exitCode=0 Mar 20 16:02:04 crc kubenswrapper[4730]: I0320 16:02:04.755150 4730 generic.go:334] "Generic (PLEG): container finished" podID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerID="fcfdc390791242aab10df223e6f66346cede319e34a43195f06b469deaa0cf18" exitCode=2 Mar 20 16:02:04 crc kubenswrapper[4730]: I0320 16:02:04.754741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerDied","Data":"3cdc313706dd8cba3d3a54bac2bfd0a19fc778679c99983f7df6459d6ad53f25"} Mar 20 16:02:04 crc kubenswrapper[4730]: I0320 16:02:04.755232 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerDied","Data":"fcfdc390791242aab10df223e6f66346cede319e34a43195f06b469deaa0cf18"} Mar 20 16:02:04 crc kubenswrapper[4730]: I0320 16:02:04.755334 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.084637 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.213678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grlf\" (UniqueName: \"kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf\") pod \"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455\" (UID: \"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455\") " Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.220099 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf" (OuterVolumeSpecName: "kube-api-access-2grlf") pod "8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" (UID: "8f1785c3-01b9-48cd-bfc9-c0fdb1c18455"). 
InnerVolumeSpecName "kube-api-access-2grlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.316277 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grlf\" (UniqueName: \"kubernetes.io/projected/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455-kube-api-access-2grlf\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.764510 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567042-rngpc" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.766074 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567042-rngpc" event={"ID":"8f1785c3-01b9-48cd-bfc9-c0fdb1c18455","Type":"ContainerDied","Data":"0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355"} Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.767309 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0195877c2e8f64939a13b8bd295cc860249fcf82aef822acc7e477581c695355" Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.771735 4730 generic.go:334] "Generic (PLEG): container finished" podID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerID="f0057f5dab402a46d249f303cdcb727d64def9624f515e382c4084ea494f079a" exitCode=0 Mar 20 16:02:05 crc kubenswrapper[4730]: I0320 16:02:05.772382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerDied","Data":"f0057f5dab402a46d249f303cdcb727d64def9624f515e382c4084ea494f079a"} Mar 20 16:02:06 crc kubenswrapper[4730]: I0320 16:02:06.146051 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567036-mnkd8"] Mar 20 16:02:06 crc kubenswrapper[4730]: I0320 16:02:06.157098 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-infra/auto-csr-approver-29567036-mnkd8"] Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.154997 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.545849 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889da12d-843a-4c71-8d48-cbb0360b024a" path="/var/lib/kubelet/pods/889da12d-843a-4c71-8d48-cbb0360b024a/volumes" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.860292 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-f7rjc"] Mar 20 16:02:07 crc kubenswrapper[4730]: E0320 16:02:07.860987 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" containerName="oc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.861014 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" containerName="oc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.861317 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" containerName="oc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.862486 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.864901 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.875608 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f7rjc"] Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.906724 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.972571 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.972683 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.972768 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:07 crc kubenswrapper[4730]: I0320 16:02:07.972803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dn84\" (UniqueName: 
\"kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.075391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.075491 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.075552 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.075600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dn84\" (UniqueName: \"kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.082450 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.083742 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.084882 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.102320 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.104136 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.108943 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dn84\" (UniqueName: \"kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84\") pod \"nova-cell0-cell-mapping-f7rjc\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.109193 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.120878 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.122305 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.124772 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.135350 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.151404 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.228205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.280232 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.283148 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.285525 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.289746 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxxq\" (UniqueName: \"kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.289797 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.289879 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.289942 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.290032 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.290124 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.294716 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9r8b\" (UniqueName: \"kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.347343 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397406 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxxq\" (UniqueName: \"kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397686 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397740 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397850 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397874 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvzz\" (UniqueName: \"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.397915 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9r8b\" (UniqueName: \"kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.403966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.415432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.416883 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.424927 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.426192 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.436965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9r8b\" (UniqueName: \"kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b\") pod \"nova-api-0\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.445302 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxxq\" (UniqueName: \"kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq\") pod \"nova-scheduler-0\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.445312 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.447375 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.451197 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.462797 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.505615 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.508627 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.511123 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.511175 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.511369 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvzz\" (UniqueName: \"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.520323 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.534162 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.534422 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvzz\" (UniqueName: \"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.535895 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.540488 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.559367 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.563423 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.614287 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.614462 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.614499 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnnn\" (UniqueName: \"kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.614566 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.614792 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: 
\"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.616331 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.616401 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.616525 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.616554 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88r6q\" (UniqueName: \"kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.616753 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") 
" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnnn\" (UniqueName: \"kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724343 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724439 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724474 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724499 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724572 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.724593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88r6q\" (UniqueName: \"kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.727426 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.727525 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.737855 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.738013 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.738771 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.739944 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.740047 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.740792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " 
pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.741989 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.745166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.746421 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88r6q\" (UniqueName: \"kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q\") pod \"nova-metadata-0\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.768882 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.772641 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnnn\" (UniqueName: \"kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn\") pod \"dnsmasq-dns-654455944c-qph9q\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.801851 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:08 crc kubenswrapper[4730]: I0320 16:02:08.842573 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.029507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f7rjc"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.179984 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.345056 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.443016 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdhdz"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.445005 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.447208 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.447791 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.454717 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdhdz"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.512273 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.512313 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jnxln\" (UniqueName: \"kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.512343 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.512452 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.613961 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.614230 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxln\" (UniqueName: \"kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.614270 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.614390 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.620053 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.644215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.644751 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.651721 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxln\" (UniqueName: \"kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " 
pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.651868 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data\") pod \"nova-cell1-conductor-db-sync-cdhdz\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.692965 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.788786 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.845704 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:09 crc kubenswrapper[4730]: W0320 16:02:09.856504 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84937d37_8276_4014_b1ae_bb84547384af.slice/crio-0a2eb8594326a606d45ae0159cba2be47a452aea3e7bbc81624368815408532b WatchSource:0}: Error finding container 0a2eb8594326a606d45ae0159cba2be47a452aea3e7bbc81624368815408532b: Status 404 returned error can't find the container with id 0a2eb8594326a606d45ae0159cba2be47a452aea3e7bbc81624368815408532b Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.856724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c","Type":"ContainerStarted","Data":"bb467f0f9108cd2d4075fdbdf95b9f10f944692ab74865e527713ebadb5491f5"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.863370 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f7rjc" 
event={"ID":"8f144e50-8d18-49a5-a3ef-84b72e6e119f","Type":"ContainerStarted","Data":"712419554ee7980049c60af4c6c43298daddafb72a74d5efc70ba46df50bba0e"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.863440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f7rjc" event={"ID":"8f144e50-8d18-49a5-a3ef-84b72e6e119f","Type":"ContainerStarted","Data":"e3b08dc7c4abd2dcaea34ec7c791ef1e1b1a8fa905aae37ea25e912f633e57a6"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.879954 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerStarted","Data":"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.902303 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01c93007-4d31-47a2-810c-819caf917e43","Type":"ContainerStarted","Data":"3250edd54dccb12be75e701aadcaf4d148acb76b38a455ec3944d6f9f7d0f0e8"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.904794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerStarted","Data":"b2e0e39bb6caaa7438ca18c26b9795ce5981a3e0bae88697a3af45f446cad74c"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.905554 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerStarted","Data":"e7a5d0df9d685e0f92bcdac057ecc8056efed293df571627fdc9a7cd3a4ee7ff"} Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.906908 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-f7rjc" podStartSLOduration=2.906880573 podStartE2EDuration="2.906880573s" podCreationTimestamp="2026-03-20 16:02:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:09.893650362 +0000 UTC m=+1389.107021741" watchObservedRunningTime="2026-03-20 16:02:09.906880573 +0000 UTC m=+1389.120251942" Mar 20 16:02:09 crc kubenswrapper[4730]: I0320 16:02:09.985322 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.166497 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.568274 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdhdz"] Mar 20 16:02:10 crc kubenswrapper[4730]: E0320 16:02:10.857599 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice/crio-da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice\": RecentStats: unable to find data in memory cache]" Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.941542 4730 generic.go:334] "Generic (PLEG): container finished" podID="84937d37-8276-4014-b1ae-bb84547384af" containerID="a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a" exitCode=0 Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.942131 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654455944c-qph9q" event={"ID":"84937d37-8276-4014-b1ae-bb84547384af","Type":"ContainerDied","Data":"a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a"} Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 
16:02:10.942195 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654455944c-qph9q" event={"ID":"84937d37-8276-4014-b1ae-bb84547384af","Type":"ContainerStarted","Data":"0a2eb8594326a606d45ae0159cba2be47a452aea3e7bbc81624368815408532b"} Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.942523 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:10 crc kubenswrapper[4730]: I0320 16:02:10.986201 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:11 crc kubenswrapper[4730]: I0320 16:02:11.023068 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:11 crc kubenswrapper[4730]: I0320 16:02:11.957843 4730 generic.go:334] "Generic (PLEG): container finished" podID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerID="9dd9d5de632bf89de261876ec488aa9f35efb63310645d30071a551b02d1b18c" exitCode=0 Mar 20 16:02:11 crc kubenswrapper[4730]: I0320 16:02:11.957917 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerDied","Data":"9dd9d5de632bf89de261876ec488aa9f35efb63310645d30071a551b02d1b18c"} Mar 20 16:02:12 crc kubenswrapper[4730]: I0320 16:02:12.966573 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" containerID="cri-o://ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" gracePeriod=30 Mar 20 16:02:13 crc kubenswrapper[4730]: W0320 16:02:13.226287 4730 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb7b5ed_16a0_4fb8_96d3_df3c9c2cd647.slice/crio-d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a WatchSource:0}: Error finding container d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a: Status 404 returned error can't find the container with id d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.416131 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.432409 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.728086 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849639 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849681 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849728 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849757 
4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849779 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849919 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.849950 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2lkw\" (UniqueName: \"kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw\") pod \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\" (UID: \"f03f19a6-9d2e-421e-9388-68ae49ae68ef\") " Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.851186 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.851437 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.861976 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts" (OuterVolumeSpecName: "scripts") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.875570 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw" (OuterVolumeSpecName: "kube-api-access-w2lkw") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "kube-api-access-w2lkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.953869 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.953899 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.953909 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03f19a6-9d2e-421e-9388-68ae49ae68ef-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.953917 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2lkw\" (UniqueName: \"kubernetes.io/projected/f03f19a6-9d2e-421e-9388-68ae49ae68ef-kube-api-access-w2lkw\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:13 crc kubenswrapper[4730]: I0320 16:02:13.984163 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.000337 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654455944c-qph9q" event={"ID":"84937d37-8276-4014-b1ae-bb84547384af","Type":"ContainerStarted","Data":"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.000482 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.004024 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01c93007-4d31-47a2-810c-819caf917e43","Type":"ContainerStarted","Data":"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.008017 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" event={"ID":"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647","Type":"ContainerStarted","Data":"78840c5174380df1ec37853cf5867820b6c6e093e27294e16eb3795a80e9c2a8"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.008065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" event={"ID":"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647","Type":"ContainerStarted","Data":"d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.010193 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerStarted","Data":"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.029434 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-654455944c-qph9q" podStartSLOduration=6.029321542 
podStartE2EDuration="6.029321542s" podCreationTimestamp="2026-03-20 16:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:14.028455877 +0000 UTC m=+1393.241827246" watchObservedRunningTime="2026-03-20 16:02:14.029321542 +0000 UTC m=+1393.242692911" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.042678 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.043341 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f03f19a6-9d2e-421e-9388-68ae49ae68ef","Type":"ContainerDied","Data":"d3a4ea5bd7c7ce048ff6487734956147a2553ddcde8a7a07f3bf8e3324ca2fc8"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.043385 4730 scope.go:117] "RemoveContainer" containerID="3cdc313706dd8cba3d3a54bac2bfd0a19fc778679c99983f7df6459d6ad53f25" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.055734 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.058026 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.953331898 podStartE2EDuration="6.058002876s" podCreationTimestamp="2026-03-20 16:02:08 +0000 UTC" firstStartedPulling="2026-03-20 16:02:09.344915449 +0000 UTC m=+1388.558286818" lastFinishedPulling="2026-03-20 16:02:13.449586427 +0000 UTC m=+1392.662957796" observedRunningTime="2026-03-20 16:02:14.052720454 +0000 UTC m=+1393.266091823" watchObservedRunningTime="2026-03-20 16:02:14.058002876 +0000 UTC m=+1393.271374245" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.061914 4730 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerStarted","Data":"9979114478a3a955b42e7b3859791b120a8c87e81eba7cafe092f17caecff80d"} Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.083797 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" podStartSLOduration=5.083778077 podStartE2EDuration="5.083778077s" podCreationTimestamp="2026-03-20 16:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:14.071738161 +0000 UTC m=+1393.285109530" watchObservedRunningTime="2026-03-20 16:02:14.083778077 +0000 UTC m=+1393.297149446" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.087034 4730 scope.go:117] "RemoveContainer" containerID="fcfdc390791242aab10df223e6f66346cede319e34a43195f06b469deaa0cf18" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.105263 4730 scope.go:117] "RemoveContainer" containerID="f0057f5dab402a46d249f303cdcb727d64def9624f515e382c4084ea494f079a" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.163969 4730 scope.go:117] "RemoveContainer" containerID="9dd9d5de632bf89de261876ec488aa9f35efb63310645d30071a551b02d1b18c" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.200661 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data" (OuterVolumeSpecName: "config-data") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.259785 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.285343 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03f19a6-9d2e-421e-9388-68ae49ae68ef" (UID: "f03f19a6-9d2e-421e-9388-68ae49ae68ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.361130 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03f19a6-9d2e-421e-9388-68ae49ae68ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.428011 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.439983 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.470777 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:14 crc kubenswrapper[4730]: E0320 16:02:14.471363 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="proxy-httpd" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471389 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="proxy-httpd" Mar 20 16:02:14 crc kubenswrapper[4730]: E0320 16:02:14.471409 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="sg-core" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471418 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="sg-core" Mar 20 16:02:14 crc kubenswrapper[4730]: E0320 16:02:14.471450 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-central-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471460 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-central-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: E0320 16:02:14.471476 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-notification-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471485 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-notification-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471882 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-notification-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471945 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="proxy-httpd" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471962 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="ceilometer-central-agent" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.471978 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" containerName="sg-core" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.474082 4730 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.483729 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.488421 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.495521 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585155 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585226 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585402 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") pod \"ceilometer-0\" (UID: 
\"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585478 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585512 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.585553 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkk7\" (UniqueName: \"kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.688998 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689072 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689131 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tkk7\" (UniqueName: \"kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689189 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689226 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689410 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.689471 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.690081 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.691196 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.694957 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.695832 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.696062 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.698714 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.740349 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tkk7\" (UniqueName: 
\"kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7\") pod \"ceilometer-0\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " pod="openstack/ceilometer-0" Mar 20 16:02:14 crc kubenswrapper[4730]: I0320 16:02:14.793446 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.072771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerStarted","Data":"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9"} Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.077609 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerStarted","Data":"c1159b8c566cff8b809d385b30d15b0c6eaab3536998d984b8f4ce6cd7a26c46"} Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.077751 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-log" containerID="cri-o://9979114478a3a955b42e7b3859791b120a8c87e81eba7cafe092f17caecff80d" gracePeriod=30 Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.078014 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-metadata" containerID="cri-o://c1159b8c566cff8b809d385b30d15b0c6eaab3536998d984b8f4ce6cd7a26c46" gracePeriod=30 Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.087397 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c","Type":"ContainerStarted","Data":"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897"} Mar 20 16:02:15 crc 
kubenswrapper[4730]: I0320 16:02:15.090263 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897" gracePeriod=30 Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.105363 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.934548693 podStartE2EDuration="7.105339282s" podCreationTimestamp="2026-03-20 16:02:08 +0000 UTC" firstStartedPulling="2026-03-20 16:02:09.282276578 +0000 UTC m=+1388.495647947" lastFinishedPulling="2026-03-20 16:02:13.453067167 +0000 UTC m=+1392.666438536" observedRunningTime="2026-03-20 16:02:15.098930778 +0000 UTC m=+1394.312302147" watchObservedRunningTime="2026-03-20 16:02:15.105339282 +0000 UTC m=+1394.318710651" Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.154225 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.376051404 podStartE2EDuration="7.154202607s" podCreationTimestamp="2026-03-20 16:02:08 +0000 UTC" firstStartedPulling="2026-03-20 16:02:09.707599184 +0000 UTC m=+1388.920970553" lastFinishedPulling="2026-03-20 16:02:13.485750387 +0000 UTC m=+1392.699121756" observedRunningTime="2026-03-20 16:02:15.122975319 +0000 UTC m=+1394.336346688" watchObservedRunningTime="2026-03-20 16:02:15.154202607 +0000 UTC m=+1394.367573976" Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.172663 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.338036492 podStartE2EDuration="7.172644427s" podCreationTimestamp="2026-03-20 16:02:08 +0000 UTC" firstStartedPulling="2026-03-20 16:02:09.64655863 +0000 UTC m=+1388.859929999" lastFinishedPulling="2026-03-20 
16:02:13.481166565 +0000 UTC m=+1392.694537934" observedRunningTime="2026-03-20 16:02:15.141949394 +0000 UTC m=+1394.355320763" watchObservedRunningTime="2026-03-20 16:02:15.172644427 +0000 UTC m=+1394.386015796" Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.375699 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.544533 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03f19a6-9d2e-421e-9388-68ae49ae68ef" path="/var/lib/kubelet/pods/f03f19a6-9d2e-421e-9388-68ae49ae68ef/volumes" Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.840690 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.841155 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerName="watcher-applier" containerID="cri-o://06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" gracePeriod=30 Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.933050 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.933326 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api-log" containerID="cri-o://c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301" gracePeriod=30 Mar 20 16:02:15 crc kubenswrapper[4730]: I0320 16:02:15.933814 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api" containerID="cri-o://cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a" gracePeriod=30 Mar 20 16:02:16 crc kubenswrapper[4730]: 
I0320 16:02:16.136819 4730 generic.go:334] "Generic (PLEG): container finished" podID="32ac7762-8088-4313-a883-898b569b9154" containerID="c1159b8c566cff8b809d385b30d15b0c6eaab3536998d984b8f4ce6cd7a26c46" exitCode=0 Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.136863 4730 generic.go:334] "Generic (PLEG): container finished" podID="32ac7762-8088-4313-a883-898b569b9154" containerID="9979114478a3a955b42e7b3859791b120a8c87e81eba7cafe092f17caecff80d" exitCode=143 Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.136897 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerDied","Data":"c1159b8c566cff8b809d385b30d15b0c6eaab3536998d984b8f4ce6cd7a26c46"} Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.136931 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerDied","Data":"9979114478a3a955b42e7b3859791b120a8c87e81eba7cafe092f17caecff80d"} Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.140917 4730 generic.go:334] "Generic (PLEG): container finished" podID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerID="c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301" exitCode=143 Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.140971 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerDied","Data":"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301"} Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.160794 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerStarted","Data":"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125"} Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.160881 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerStarted","Data":"fc7185d335545d2bc3730a5a21059470bf6bc2bcae0a41d7ca4c2027aa011079"} Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.188584 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.225797 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle\") pod \"32ac7762-8088-4313-a883-898b569b9154\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.225863 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88r6q\" (UniqueName: \"kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q\") pod \"32ac7762-8088-4313-a883-898b569b9154\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.225943 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data\") pod \"32ac7762-8088-4313-a883-898b569b9154\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.226134 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs\") pod \"32ac7762-8088-4313-a883-898b569b9154\" (UID: \"32ac7762-8088-4313-a883-898b569b9154\") " Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.226920 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs" (OuterVolumeSpecName: "logs") pod "32ac7762-8088-4313-a883-898b569b9154" (UID: "32ac7762-8088-4313-a883-898b569b9154"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.234858 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ac7762-8088-4313-a883-898b569b9154-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.238609 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q" (OuterVolumeSpecName: "kube-api-access-88r6q") pod "32ac7762-8088-4313-a883-898b569b9154" (UID: "32ac7762-8088-4313-a883-898b569b9154"). InnerVolumeSpecName "kube-api-access-88r6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.266502 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data" (OuterVolumeSpecName: "config-data") pod "32ac7762-8088-4313-a883-898b569b9154" (UID: "32ac7762-8088-4313-a883-898b569b9154"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.269624 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32ac7762-8088-4313-a883-898b569b9154" (UID: "32ac7762-8088-4313-a883-898b569b9154"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.344187 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.344228 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ac7762-8088-4313-a883-898b569b9154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:16 crc kubenswrapper[4730]: I0320 16:02:16.344261 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88r6q\" (UniqueName: \"kubernetes.io/projected/32ac7762-8088-4313-a883-898b569b9154-kube-api-access-88r6q\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.173849 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"32ac7762-8088-4313-a883-898b569b9154","Type":"ContainerDied","Data":"e7a5d0df9d685e0f92bcdac057ecc8056efed293df571627fdc9a7cd3a4ee7ff"} Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.174276 4730 scope.go:117] "RemoveContainer" containerID="c1159b8c566cff8b809d385b30d15b0c6eaab3536998d984b8f4ce6cd7a26c46" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.173870 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.176178 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerStarted","Data":"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018"} Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.285572 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.308288 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.318851 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:17 crc kubenswrapper[4730]: E0320 16:02:17.319455 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-metadata" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.319474 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-metadata" Mar 20 16:02:17 crc kubenswrapper[4730]: E0320 16:02:17.319493 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-log" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.319501 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-log" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.319700 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ac7762-8088-4313-a883-898b569b9154" containerName="nova-metadata-log" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.319722 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ac7762-8088-4313-a883-898b569b9154" 
containerName="nova-metadata-metadata" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.321274 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.323770 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.323778 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.332316 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.340062 4730 scope.go:117] "RemoveContainer" containerID="9979114478a3a955b42e7b3859791b120a8c87e81eba7cafe092f17caecff80d" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.373989 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctt8\" (UniqueName: \"kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.374063 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.374444 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.374528 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.374611 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.476236 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.477151 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.477195 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.477233 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctt8\" (UniqueName: \"kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.477273 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.477761 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.482437 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.487155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.505847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.525092 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctt8\" (UniqueName: \"kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8\") pod \"nova-metadata-0\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.548058 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ac7762-8088-4313-a883-898b569b9154" path="/var/lib/kubelet/pods/32ac7762-8088-4313-a883-898b569b9154/volumes" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.648879 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.760889 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886088 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886399 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk5wd\" (UniqueName: \"kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886435 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886679 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.886771 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca\") pod \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\" (UID: \"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a\") " Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.892560 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs" (OuterVolumeSpecName: "logs") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.904847 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd" (OuterVolumeSpecName: "kube-api-access-fk5wd") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "kube-api-access-fk5wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.920105 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.951372 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.957429 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.972024 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.982695 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data" (OuterVolumeSpecName: "config-data") pod "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" (UID: "3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989447 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989479 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989492 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989503 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc 
kubenswrapper[4730]: I0320 16:02:17.989513 4730 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989524 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:17 crc kubenswrapper[4730]: I0320 16:02:17.989534 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk5wd\" (UniqueName: \"kubernetes.io/projected/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a-kube-api-access-fk5wd\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.187415 4730 generic.go:334] "Generic (PLEG): container finished" podID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerID="cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a" exitCode=0 Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.187464 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerDied","Data":"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a"} Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.187494 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.187530 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a","Type":"ContainerDied","Data":"4c2c6a630c73d61868e537787c0ffc30dd54b3a6ece7e73f02c494b3afd3c924"} Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.187553 4730 scope.go:117] "RemoveContainer" containerID="cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.191487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerStarted","Data":"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f"} Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.213660 4730 scope.go:117] "RemoveContainer" containerID="c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.233401 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.262481 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.274735 4730 scope.go:117] "RemoveContainer" containerID="cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.276279 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.291795 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.292695 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api-log" Mar 20 
16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.292720 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api-log" Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.292769 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.292781 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.293003 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.293028 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" containerName="watcher-api-log" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.294564 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.298183 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.298418 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.298545 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.305176 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a\": container with ID starting with cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a not found: ID does not exist" containerID="cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.305403 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a"} err="failed to get container status \"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a\": rpc error: code = NotFound desc = could not find container \"cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a\": container with ID starting with cdc802e9a5716d718ebb13c2b13971164edec56068203d16169206897ba03b6a not found: ID does not exist" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.305432 4730 scope.go:117] "RemoveContainer" containerID="c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301" Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.309062 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301\": container with ID starting with c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301 not found: ID does not exist" containerID="c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.309097 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301"} err="failed to get container status \"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301\": rpc error: code = NotFound desc = could not find container \"c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301\": container with ID starting with c743230ba78e3e74fccce83315455e43e5c471c2e21f3d4ac91af747ed0b8301 not found: ID does not exist" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.310320 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.399488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-config-data\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.399561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.399911 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.399977 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.400235 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba310e23-3097-4114-8628-4e7ada94eac6-logs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.400319 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252wn\" (UniqueName: \"kubernetes.io/projected/ba310e23-3097-4114-8628-4e7ada94eac6-kube-api-access-252wn\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.400354 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.405975 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.407803 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.409214 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 20 16:02:18 crc kubenswrapper[4730]: E0320 16:02:18.409317 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerName="watcher-applier" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502602 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba310e23-3097-4114-8628-4e7ada94eac6-logs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502675 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252wn\" (UniqueName: \"kubernetes.io/projected/ba310e23-3097-4114-8628-4e7ada94eac6-kube-api-access-252wn\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " 
pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502703 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502748 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-config-data\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502815 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502901 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.502931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.506767 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ba310e23-3097-4114-8628-4e7ada94eac6-logs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.507126 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.507599 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.509792 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.510236 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-config-data\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.513746 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba310e23-3097-4114-8628-4e7ada94eac6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 
16:02:18.524111 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252wn\" (UniqueName: \"kubernetes.io/projected/ba310e23-3097-4114-8628-4e7ada94eac6-kube-api-access-252wn\") pod \"watcher-api-0\" (UID: \"ba310e23-3097-4114-8628-4e7ada94eac6\") " pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.541655 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.541914 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.564933 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.566226 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.614902 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.622629 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.771232 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.853382 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.938332 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:02:18 crc kubenswrapper[4730]: I0320 16:02:18.938570 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="dnsmasq-dns" containerID="cri-o://1a5f9090983f451fb355cec1fdd913ee82666d2f07f611bcec31819f098f8126" gracePeriod=10 Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.221981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerStarted","Data":"a527ffbd2a54fec297ae5f5968aa11134c7ecf139f953464b50d8aaf8a48e15d"} Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.222262 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerStarted","Data":"a0c082b53bef3328cda841db37115f58cfc064af02b9d184944305bba914d52b"} Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.222272 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerStarted","Data":"1a4e509ba032eff859c3dcc5260a9f2631d938aec766c95550415d37207eae1a"} Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.253943 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.253916722 podStartE2EDuration="2.253916722s" podCreationTimestamp="2026-03-20 16:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:19.244484111 +0000 UTC m=+1398.457855480" watchObservedRunningTime="2026-03-20 16:02:19.253916722 +0000 UTC m=+1398.467288091" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.262932 4730 generic.go:334] "Generic (PLEG): container finished" podID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerID="1a5f9090983f451fb355cec1fdd913ee82666d2f07f611bcec31819f098f8126" exitCode=0 Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.263937 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" event={"ID":"ff335b2a-909a-4c39-a045-2267c73ac8b2","Type":"ContainerDied","Data":"1a5f9090983f451fb355cec1fdd913ee82666d2f07f611bcec31819f098f8126"} Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.306845 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.392155 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.562682 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a" path="/var/lib/kubelet/pods/3a179d19-c83c-4bc1-b74a-d77d6a6b7d2a/volumes" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.627856 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.628395 4730 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.734653 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770049 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770166 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770226 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njgl\" (UniqueName: \"kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770344 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.770376 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0\") pod \"ff335b2a-909a-4c39-a045-2267c73ac8b2\" (UID: \"ff335b2a-909a-4c39-a045-2267c73ac8b2\") " Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.775333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl" (OuterVolumeSpecName: "kube-api-access-7njgl") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "kube-api-access-7njgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.839073 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.873783 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njgl\" (UniqueName: \"kubernetes.io/projected/ff335b2a-909a-4c39-a045-2267c73ac8b2-kube-api-access-7njgl\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:19 crc kubenswrapper[4730]: I0320 16:02:19.873818 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.046104 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.047322 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.066490 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.066593 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.078459 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data\") pod \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.078659 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs\") pod \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.078732 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle\") pod \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.078817 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djjz2\" (UniqueName: \"kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2\") pod \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\" (UID: \"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0\") " Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.079402 4730 reconciler_common.go:293] 
"Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.079422 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.079433 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.079443 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs" (OuterVolumeSpecName: "logs") pod "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" (UID: "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.080715 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config" (OuterVolumeSpecName: "config") pod "ff335b2a-909a-4c39-a045-2267c73ac8b2" (UID: "ff335b2a-909a-4c39-a045-2267c73ac8b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.095019 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2" (OuterVolumeSpecName: "kube-api-access-djjz2") pod "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" (UID: "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0"). InnerVolumeSpecName "kube-api-access-djjz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.171692 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" (UID: "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.173833 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data" (OuterVolumeSpecName: "config-data") pod "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" (UID: "5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.181170 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.181205 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.181215 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.181226 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djjz2\" (UniqueName: \"kubernetes.io/projected/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0-kube-api-access-djjz2\") on node \"crc\" DevicePath \"\"" Mar 20 
16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.181237 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff335b2a-909a-4c39-a045-2267c73ac8b2-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.276821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerStarted","Data":"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.278453 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.286963 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" event={"ID":"ff335b2a-909a-4c39-a045-2267c73ac8b2","Type":"ContainerDied","Data":"bb5085b7efdc43fccc1330db4f8eeb5dbaa633cfdd11172926159e432c750243"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.287022 4730 scope.go:117] "RemoveContainer" containerID="1a5f9090983f451fb355cec1fdd913ee82666d2f07f611bcec31819f098f8126" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.287143 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76cb94d47c-txmh6" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.293988 4730 generic.go:334] "Generic (PLEG): container finished" podID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" exitCode=0 Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.294107 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.294789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0","Type":"ContainerDied","Data":"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.294833 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0","Type":"ContainerDied","Data":"505e09709aeaef8a87d638031cf49a2f3e2fbdc9ec33e31ca6d7f0d3b9378532"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.307370 4730 generic.go:334] "Generic (PLEG): container finished" podID="8f144e50-8d18-49a5-a3ef-84b72e6e119f" containerID="712419554ee7980049c60af4c6c43298daddafb72a74d5efc70ba46df50bba0e" exitCode=0 Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.307450 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f7rjc" event={"ID":"8f144e50-8d18-49a5-a3ef-84b72e6e119f","Type":"ContainerDied","Data":"712419554ee7980049c60af4c6c43298daddafb72a74d5efc70ba46df50bba0e"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.320074 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.111080671 podStartE2EDuration="6.320043247s" podCreationTimestamp="2026-03-20 16:02:14 +0000 UTC" firstStartedPulling="2026-03-20 16:02:15.379929095 +0000 UTC m=+1394.593300464" lastFinishedPulling="2026-03-20 16:02:19.588891671 +0000 UTC m=+1398.802263040" observedRunningTime="2026-03-20 16:02:20.302649397 +0000 UTC m=+1399.516020766" watchObservedRunningTime="2026-03-20 16:02:20.320043247 +0000 UTC m=+1399.533414616" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.329692 4730 scope.go:117] "RemoveContainer" 
containerID="e17867e23320294cb0b9160d421d696c692c189cb8d85e17cafc60d15c897466" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.337552 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ba310e23-3097-4114-8628-4e7ada94eac6","Type":"ContainerStarted","Data":"2383b07d3519cde613c64363468a9df1e002f0bdb657587ae10eb916ce433034"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.337613 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ba310e23-3097-4114-8628-4e7ada94eac6","Type":"ContainerStarted","Data":"ae8a9c3901658c0acca61c344520be9c6936365074f0cf75db8856e5b7cb9951"} Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.373145 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.388330 4730 scope.go:117] "RemoveContainer" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.394423 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76cb94d47c-txmh6"] Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.407355 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.440045 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.440120 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:20 crc kubenswrapper[4730]: E0320 16:02:20.441530 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerName="watcher-applier" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.441554 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerName="watcher-applier" Mar 20 16:02:20 crc kubenswrapper[4730]: E0320 16:02:20.441586 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="init" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.441594 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="init" Mar 20 16:02:20 crc kubenswrapper[4730]: E0320 16:02:20.441617 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="dnsmasq-dns" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.441622 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="dnsmasq-dns" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.441817 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" containerName="watcher-applier" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.441846 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" containerName="dnsmasq-dns" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.442539 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.445342 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.451002 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.477442 4730 scope.go:117] "RemoveContainer" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" Mar 20 16:02:20 crc kubenswrapper[4730]: E0320 16:02:20.479192 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681\": container with ID starting with 06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681 not found: ID does not exist" containerID="06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.479234 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681"} err="failed to get container status \"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681\": rpc error: code = NotFound desc = could not find container \"06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681\": container with ID starting with 06e4300ad2aa0cb045945f92073482ca78016e812eb685af49fb195ce150c681 not found: ID does not exist" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.488097 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d05b7e1-a651-404e-89e9-8276427610fc-logs\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 
16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.488148 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-config-data\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.488239 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.488317 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zbq\" (UniqueName: \"kubernetes.io/projected/0d05b7e1-a651-404e-89e9-8276427610fc-kube-api-access-79zbq\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.591058 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.591181 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79zbq\" (UniqueName: \"kubernetes.io/projected/0d05b7e1-a651-404e-89e9-8276427610fc-kube-api-access-79zbq\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.591342 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d05b7e1-a651-404e-89e9-8276427610fc-logs\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.591363 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-config-data\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.592644 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d05b7e1-a651-404e-89e9-8276427610fc-logs\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.598324 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.600002 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05b7e1-a651-404e-89e9-8276427610fc-config-data\") pod \"watcher-applier-0\" (UID: \"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.611113 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79zbq\" (UniqueName: \"kubernetes.io/projected/0d05b7e1-a651-404e-89e9-8276427610fc-kube-api-access-79zbq\") pod \"watcher-applier-0\" (UID: 
\"0d05b7e1-a651-404e-89e9-8276427610fc\") " pod="openstack/watcher-applier-0" Mar 20 16:02:20 crc kubenswrapper[4730]: I0320 16:02:20.766739 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 20 16:02:21 crc kubenswrapper[4730]: E0320 16:02:21.149528 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice/crio-da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2\": RecentStats: unable to find data in memory cache]" Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.294110 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 20 16:02:21 crc kubenswrapper[4730]: W0320 16:02:21.297991 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d05b7e1_a651_404e_89e9_8276427610fc.slice/crio-a88aa72f302dc4cb94da7ed72334760b9efcfb8dbdce8e9f4770d7b90a1a9939 WatchSource:0}: Error finding container a88aa72f302dc4cb94da7ed72334760b9efcfb8dbdce8e9f4770d7b90a1a9939: Status 404 returned error can't find the container with id a88aa72f302dc4cb94da7ed72334760b9efcfb8dbdce8e9f4770d7b90a1a9939 Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.355148 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0d05b7e1-a651-404e-89e9-8276427610fc","Type":"ContainerStarted","Data":"a88aa72f302dc4cb94da7ed72334760b9efcfb8dbdce8e9f4770d7b90a1a9939"} Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.357378 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"ba310e23-3097-4114-8628-4e7ada94eac6","Type":"ContainerStarted","Data":"c956e77723ace7f162ae394dd01ecd3232795058c3ddc5312f368da025e800eb"} Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.387547 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.387525492 podStartE2EDuration="3.387525492s" podCreationTimestamp="2026-03-20 16:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:21.3780454 +0000 UTC m=+1400.591416769" watchObservedRunningTime="2026-03-20 16:02:21.387525492 +0000 UTC m=+1400.600896861" Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.568852 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0" path="/var/lib/kubelet/pods/5bd7fdc7-f9da-4f44-98d3-7b86f541b9f0/volumes" Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.579965 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff335b2a-909a-4c39-a045-2267c73ac8b2" path="/var/lib/kubelet/pods/ff335b2a-909a-4c39-a045-2267c73ac8b2/volumes" Mar 20 16:02:21 crc kubenswrapper[4730]: I0320 16:02:21.828262 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.020435 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data\") pod \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.020530 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dn84\" (UniqueName: \"kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84\") pod \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.020582 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts\") pod \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.020646 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle\") pod \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\" (UID: \"8f144e50-8d18-49a5-a3ef-84b72e6e119f\") " Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.025261 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84" (OuterVolumeSpecName: "kube-api-access-4dn84") pod "8f144e50-8d18-49a5-a3ef-84b72e6e119f" (UID: "8f144e50-8d18-49a5-a3ef-84b72e6e119f"). InnerVolumeSpecName "kube-api-access-4dn84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.032665 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts" (OuterVolumeSpecName: "scripts") pod "8f144e50-8d18-49a5-a3ef-84b72e6e119f" (UID: "8f144e50-8d18-49a5-a3ef-84b72e6e119f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.065364 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f144e50-8d18-49a5-a3ef-84b72e6e119f" (UID: "8f144e50-8d18-49a5-a3ef-84b72e6e119f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.065386 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data" (OuterVolumeSpecName: "config-data") pod "8f144e50-8d18-49a5-a3ef-84b72e6e119f" (UID: "8f144e50-8d18-49a5-a3ef-84b72e6e119f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.123976 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.124020 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.124035 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f144e50-8d18-49a5-a3ef-84b72e6e119f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.124046 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dn84\" (UniqueName: \"kubernetes.io/projected/8f144e50-8d18-49a5-a3ef-84b72e6e119f-kube-api-access-4dn84\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.370366 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0d05b7e1-a651-404e-89e9-8276427610fc","Type":"ContainerStarted","Data":"7b930a07c6f095dab4008f80ae7f0a0319d912199b4e910c344255dd178b75f0"} Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.372274 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f7rjc" event={"ID":"8f144e50-8d18-49a5-a3ef-84b72e6e119f","Type":"ContainerDied","Data":"e3b08dc7c4abd2dcaea34ec7c791ef1e1b1a8fa905aae37ea25e912f633e57a6"} Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.372306 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b08dc7c4abd2dcaea34ec7c791ef1e1b1a8fa905aae37ea25e912f633e57a6" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 
16:02:22.372303 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f7rjc" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.372688 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.397950 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.397929216 podStartE2EDuration="2.397929216s" podCreationTimestamp="2026-03-20 16:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:22.386191289 +0000 UTC m=+1401.599562658" watchObservedRunningTime="2026-03-20 16:02:22.397929216 +0000 UTC m=+1401.611300585" Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.523318 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.523568 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-log" containerID="cri-o://605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4" gracePeriod=30 Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.523710 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-api" containerID="cri-o://077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9" gracePeriod=30 Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.570864 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.571087 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="01c93007-4d31-47a2-810c-819caf917e43" containerName="nova-scheduler-scheduler" containerID="cri-o://68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" gracePeriod=30 Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.585234 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.585514 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-log" containerID="cri-o://a0c082b53bef3328cda841db37115f58cfc064af02b9d184944305bba914d52b" gracePeriod=30 Mar 20 16:02:22 crc kubenswrapper[4730]: I0320 16:02:22.586038 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-metadata" containerID="cri-o://a527ffbd2a54fec297ae5f5968aa11134c7ecf139f953464b50d8aaf8a48e15d" gracePeriod=30 Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.390158 4730 generic.go:334] "Generic (PLEG): container finished" podID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerID="605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4" exitCode=143 Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.390435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerDied","Data":"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4"} Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.392931 4730 generic.go:334] "Generic (PLEG): container finished" podID="23319f08-4294-49cc-bb24-7a01520e37c6" containerID="a527ffbd2a54fec297ae5f5968aa11134c7ecf139f953464b50d8aaf8a48e15d" exitCode=0 Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.392948 4730 generic.go:334] "Generic (PLEG): 
container finished" podID="23319f08-4294-49cc-bb24-7a01520e37c6" containerID="a0c082b53bef3328cda841db37115f58cfc064af02b9d184944305bba914d52b" exitCode=143 Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.393012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerDied","Data":"a527ffbd2a54fec297ae5f5968aa11134c7ecf139f953464b50d8aaf8a48e15d"} Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.393038 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerDied","Data":"a0c082b53bef3328cda841db37115f58cfc064af02b9d184944305bba914d52b"} Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.561138 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:23 crc kubenswrapper[4730]: E0320 16:02:23.568911 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 16:02:23 crc kubenswrapper[4730]: E0320 16:02:23.571450 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 16:02:23 crc kubenswrapper[4730]: E0320 16:02:23.577674 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 16:02:23 crc kubenswrapper[4730]: E0320 16:02:23.577792 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="01c93007-4d31-47a2-810c-819caf917e43" containerName="nova-scheduler-scheduler" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.623989 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.658632 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs\") pod \"23319f08-4294-49cc-bb24-7a01520e37c6\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.658780 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data\") pod \"23319f08-4294-49cc-bb24-7a01520e37c6\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.658950 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle\") pod \"23319f08-4294-49cc-bb24-7a01520e37c6\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.659014 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctt8\" (UniqueName: 
\"kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8\") pod \"23319f08-4294-49cc-bb24-7a01520e37c6\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.659047 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs\") pod \"23319f08-4294-49cc-bb24-7a01520e37c6\" (UID: \"23319f08-4294-49cc-bb24-7a01520e37c6\") " Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.659838 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs" (OuterVolumeSpecName: "logs") pod "23319f08-4294-49cc-bb24-7a01520e37c6" (UID: "23319f08-4294-49cc-bb24-7a01520e37c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.661060 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23319f08-4294-49cc-bb24-7a01520e37c6-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.665675 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8" (OuterVolumeSpecName: "kube-api-access-7ctt8") pod "23319f08-4294-49cc-bb24-7a01520e37c6" (UID: "23319f08-4294-49cc-bb24-7a01520e37c6"). InnerVolumeSpecName "kube-api-access-7ctt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.696628 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23319f08-4294-49cc-bb24-7a01520e37c6" (UID: "23319f08-4294-49cc-bb24-7a01520e37c6"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.698401 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data" (OuterVolumeSpecName: "config-data") pod "23319f08-4294-49cc-bb24-7a01520e37c6" (UID: "23319f08-4294-49cc-bb24-7a01520e37c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.716938 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "23319f08-4294-49cc-bb24-7a01520e37c6" (UID: "23319f08-4294-49cc-bb24-7a01520e37c6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.762513 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.762544 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctt8\" (UniqueName: \"kubernetes.io/projected/23319f08-4294-49cc-bb24-7a01520e37c6-kube-api-access-7ctt8\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.762558 4730 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:23 crc kubenswrapper[4730]: I0320 16:02:23.762611 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/23319f08-4294-49cc-bb24-7a01520e37c6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.404157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"23319f08-4294-49cc-bb24-7a01520e37c6","Type":"ContainerDied","Data":"1a4e509ba032eff859c3dcc5260a9f2631d938aec766c95550415d37207eae1a"} Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.404207 4730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.404183 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.404244 4730 scope.go:117] "RemoveContainer" containerID="a527ffbd2a54fec297ae5f5968aa11134c7ecf139f953464b50d8aaf8a48e15d" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.458409 4730 scope.go:117] "RemoveContainer" containerID="a0c082b53bef3328cda841db37115f58cfc064af02b9d184944305bba914d52b" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.555031 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.580891 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.597852 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:24 crc kubenswrapper[4730]: E0320 16:02:24.598721 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-log" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.598746 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-log" Mar 20 16:02:24 crc kubenswrapper[4730]: E0320 16:02:24.598765 
4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-metadata" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.598772 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-metadata" Mar 20 16:02:24 crc kubenswrapper[4730]: E0320 16:02:24.598793 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f144e50-8d18-49a5-a3ef-84b72e6e119f" containerName="nova-manage" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.598801 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f144e50-8d18-49a5-a3ef-84b72e6e119f" containerName="nova-manage" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.599189 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f144e50-8d18-49a5-a3ef-84b72e6e119f" containerName="nova-manage" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.599218 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-log" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.599238 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="23319f08-4294-49cc-bb24-7a01520e37c6" containerName="nova-metadata-metadata" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.622159 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.622298 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.624596 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.624899 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.797349 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.797400 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.797483 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.797556 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.797673 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws9p\" (UniqueName: \"kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.899401 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xws9p\" (UniqueName: \"kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.899516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.899545 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.899606 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.899664 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.901118 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.907138 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.907602 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.909041 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc kubenswrapper[4730]: I0320 16:02:24.918278 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws9p\" (UniqueName: \"kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p\") pod \"nova-metadata-0\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " pod="openstack/nova-metadata-0" Mar 20 16:02:24 crc 
kubenswrapper[4730]: I0320 16:02:24.947366 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.086985 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.205480 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data\") pod \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.205713 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs\") pod \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.205757 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9r8b\" (UniqueName: \"kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b\") pod \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.205831 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle\") pod \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\" (UID: \"585eb246-c6cd-4641-a7fb-86d2ef87e31e\") " Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.206541 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs" (OuterVolumeSpecName: "logs") pod 
"585eb246-c6cd-4641-a7fb-86d2ef87e31e" (UID: "585eb246-c6cd-4641-a7fb-86d2ef87e31e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.212689 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b" (OuterVolumeSpecName: "kube-api-access-l9r8b") pod "585eb246-c6cd-4641-a7fb-86d2ef87e31e" (UID: "585eb246-c6cd-4641-a7fb-86d2ef87e31e"). InnerVolumeSpecName "kube-api-access-l9r8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.243412 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "585eb246-c6cd-4641-a7fb-86d2ef87e31e" (UID: "585eb246-c6cd-4641-a7fb-86d2ef87e31e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.248638 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data" (OuterVolumeSpecName: "config-data") pod "585eb246-c6cd-4641-a7fb-86d2ef87e31e" (UID: "585eb246-c6cd-4641-a7fb-86d2ef87e31e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.308436 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585eb246-c6cd-4641-a7fb-86d2ef87e31e-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.308483 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9r8b\" (UniqueName: \"kubernetes.io/projected/585eb246-c6cd-4641-a7fb-86d2ef87e31e-kube-api-access-l9r8b\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.308499 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.308511 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/585eb246-c6cd-4641-a7fb-86d2ef87e31e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.337189 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.415012 4730 generic.go:334] "Generic (PLEG): container finished" podID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerID="077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9" exitCode=0 Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.415070 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerDied","Data":"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9"} Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.415096 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"585eb246-c6cd-4641-a7fb-86d2ef87e31e","Type":"ContainerDied","Data":"b2e0e39bb6caaa7438ca18c26b9795ce5981a3e0bae88697a3af45f446cad74c"} Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.415114 4730 scope.go:117] "RemoveContainer" containerID="077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.415221 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.474439 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:02:25 crc kubenswrapper[4730]: W0320 16:02:25.501790 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9bfb6c0_4971_4a58_aacc_17636a95b8a4.slice/crio-26008e0cde0b10d5cf635c86bad21a3a5eba1afc3f588c8674fe357301fcf0ca WatchSource:0}: Error finding container 26008e0cde0b10d5cf635c86bad21a3a5eba1afc3f588c8674fe357301fcf0ca: Status 404 returned error can't find the container with id 26008e0cde0b10d5cf635c86bad21a3a5eba1afc3f588c8674fe357301fcf0ca Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.505467 4730 scope.go:117] "RemoveContainer" containerID="605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.517328 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.531265 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.578798 4730 scope.go:117] "RemoveContainer" containerID="077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.581240 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="23319f08-4294-49cc-bb24-7a01520e37c6" path="/var/lib/kubelet/pods/23319f08-4294-49cc-bb24-7a01520e37c6/volumes" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.582160 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" path="/var/lib/kubelet/pods/585eb246-c6cd-4641-a7fb-86d2ef87e31e/volumes" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.582776 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:25 crc kubenswrapper[4730]: E0320 16:02:25.583105 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-api" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.583121 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-api" Mar 20 16:02:25 crc kubenswrapper[4730]: E0320 16:02:25.583145 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-log" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.583152 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-log" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.583396 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-log" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.583428 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="585eb246-c6cd-4641-a7fb-86d2ef87e31e" containerName="nova-api-api" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.584436 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: E0320 16:02:25.584942 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9\": container with ID starting with 077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9 not found: ID does not exist" containerID="077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.584995 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9"} err="failed to get container status \"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9\": rpc error: code = NotFound desc = could not find container \"077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9\": container with ID starting with 077bbf75eaae181c0e1146dfa3d57949104bdbba5a29e133956ce977816562e9 not found: ID does not exist" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.585026 4730 scope.go:117] "RemoveContainer" containerID="605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.586756 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 16:02:25 crc kubenswrapper[4730]: E0320 16:02:25.588386 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4\": container with ID starting with 605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4 not found: ID does not exist" containerID="605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.588470 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4"} err="failed to get container status \"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4\": rpc error: code = NotFound desc = could not find container \"605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4\": container with ID starting with 605dab74894b941f8dc47934657056d0f993430ffbd34d8c7cb723a73fb546d4 not found: ID does not exist" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.590341 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.719639 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4nm\" (UniqueName: \"kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.719971 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.720001 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.720042 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.767655 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.821800 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4nm\" (UniqueName: \"kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.821855 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.821886 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.821935 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.822432 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs\") pod \"nova-api-0\" (UID: 
\"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.827810 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.827889 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.839446 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4nm\" (UniqueName: \"kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm\") pod \"nova-api-0\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " pod="openstack/nova-api-0" Mar 20 16:02:25 crc kubenswrapper[4730]: I0320 16:02:25.916811 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.184067 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:26 crc kubenswrapper[4730]: W0320 16:02:26.185508 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc78da88_5699_44ed_af14_7627ea6191f9.slice/crio-bcfa21becb5d8093cfdbab459412302a96e0a7cd61a8f2880830161d043b5f7d WatchSource:0}: Error finding container bcfa21becb5d8093cfdbab459412302a96e0a7cd61a8f2880830161d043b5f7d: Status 404 returned error can't find the container with id bcfa21becb5d8093cfdbab459412302a96e0a7cd61a8f2880830161d043b5f7d Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.436837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerStarted","Data":"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.436880 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerStarted","Data":"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.436890 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerStarted","Data":"26008e0cde0b10d5cf635c86bad21a3a5eba1afc3f588c8674fe357301fcf0ca"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.440613 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerStarted","Data":"42f9304ba2a3640bc2a608f72633df51d75830859b8b512c72f883ee6ebbc449"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 
16:02:26.440653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerStarted","Data":"bcfa21becb5d8093cfdbab459412302a96e0a7cd61a8f2880830161d043b5f7d"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.442039 4730 generic.go:334] "Generic (PLEG): container finished" podID="ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" containerID="78840c5174380df1ec37853cf5867820b6c6e093e27294e16eb3795a80e9c2a8" exitCode=0 Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.442079 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" event={"ID":"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647","Type":"ContainerDied","Data":"78840c5174380df1ec37853cf5867820b6c6e093e27294e16eb3795a80e9c2a8"} Mar 20 16:02:26 crc kubenswrapper[4730]: I0320 16:02:26.467673 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.46765321 podStartE2EDuration="2.46765321s" podCreationTimestamp="2026-03-20 16:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:26.460257348 +0000 UTC m=+1405.673628727" watchObservedRunningTime="2026-03-20 16:02:26.46765321 +0000 UTC m=+1405.681024569" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.127173 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.260743 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle\") pod \"01c93007-4d31-47a2-810c-819caf917e43\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.260972 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxxq\" (UniqueName: \"kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq\") pod \"01c93007-4d31-47a2-810c-819caf917e43\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.261073 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data\") pod \"01c93007-4d31-47a2-810c-819caf917e43\" (UID: \"01c93007-4d31-47a2-810c-819caf917e43\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.273633 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq" (OuterVolumeSpecName: "kube-api-access-nlxxq") pod "01c93007-4d31-47a2-810c-819caf917e43" (UID: "01c93007-4d31-47a2-810c-819caf917e43"). InnerVolumeSpecName "kube-api-access-nlxxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.293573 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data" (OuterVolumeSpecName: "config-data") pod "01c93007-4d31-47a2-810c-819caf917e43" (UID: "01c93007-4d31-47a2-810c-819caf917e43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.297014 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01c93007-4d31-47a2-810c-819caf917e43" (UID: "01c93007-4d31-47a2-810c-819caf917e43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.363487 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.363530 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxxq\" (UniqueName: \"kubernetes.io/projected/01c93007-4d31-47a2-810c-819caf917e43-kube-api-access-nlxxq\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.363547 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c93007-4d31-47a2-810c-819caf917e43-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.454342 4730 generic.go:334] "Generic (PLEG): container finished" podID="01c93007-4d31-47a2-810c-819caf917e43" containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" exitCode=0 Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.454418 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01c93007-4d31-47a2-810c-819caf917e43","Type":"ContainerDied","Data":"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d"} Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.454451 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"01c93007-4d31-47a2-810c-819caf917e43","Type":"ContainerDied","Data":"3250edd54dccb12be75e701aadcaf4d148acb76b38a455ec3944d6f9f7d0f0e8"} Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.454473 4730 scope.go:117] "RemoveContainer" containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.454615 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.461570 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerStarted","Data":"b7766e5f23e174441651086dc567c68965531e447de145ebeb0b1414c28ef4e6"} Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.488683 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.488661348 podStartE2EDuration="2.488661348s" podCreationTimestamp="2026-03-20 16:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:27.486496856 +0000 UTC m=+1406.699868225" watchObservedRunningTime="2026-03-20 16:02:27.488661348 +0000 UTC m=+1406.702032717" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.491783 4730 scope.go:117] "RemoveContainer" containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" Mar 20 16:02:27 crc kubenswrapper[4730]: E0320 16:02:27.492687 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d\": container with ID starting with 68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d not found: ID does not exist" 
containerID="68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.492953 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d"} err="failed to get container status \"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d\": rpc error: code = NotFound desc = could not find container \"68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d\": container with ID starting with 68063e072de62260e0731b15d4d132f06b3322eede18b910d69b064464dffc8d not found: ID does not exist" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.518195 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.542522 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.563502 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:27 crc kubenswrapper[4730]: E0320 16:02:27.564218 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c93007-4d31-47a2-810c-819caf917e43" containerName="nova-scheduler-scheduler" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.564376 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c93007-4d31-47a2-810c-819caf917e43" containerName="nova-scheduler-scheduler" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.564639 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c93007-4d31-47a2-810c-819caf917e43" containerName="nova-scheduler-scheduler" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.565438 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.569896 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.584611 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.671099 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.671157 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnlf\" (UniqueName: \"kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.671271 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.771058 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.773064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.773115 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnlf\" (UniqueName: \"kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.773183 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.777133 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.781699 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.790004 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qrnlf\" (UniqueName: \"kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf\") pod \"nova-scheduler-0\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.874287 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle\") pod \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.874435 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxln\" (UniqueName: \"kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln\") pod \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.874490 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data\") pod \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.874876 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts\") pod \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\" (UID: \"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647\") " Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.878609 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln" (OuterVolumeSpecName: "kube-api-access-jnxln") pod "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" 
(UID: "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647"). InnerVolumeSpecName "kube-api-access-jnxln". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.879208 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts" (OuterVolumeSpecName: "scripts") pod "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" (UID: "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.889995 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.902931 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data" (OuterVolumeSpecName: "config-data") pod "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" (UID: "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.903521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" (UID: "ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.980590 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxln\" (UniqueName: \"kubernetes.io/projected/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-kube-api-access-jnxln\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.980617 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.980627 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:27 crc kubenswrapper[4730]: I0320 16:02:27.980635 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:28 crc kubenswrapper[4730]: W0320 16:02:28.360223 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3edc02b8_2451_4edf_a79d_fc86a078de83.slice/crio-b529fd31ef154a115120a0df54b6ded5ddefe9a8e3d6d2b8b099bc3bd1c28517 WatchSource:0}: Error finding container b529fd31ef154a115120a0df54b6ded5ddefe9a8e3d6d2b8b099bc3bd1c28517: Status 404 returned error can't find the container with id b529fd31ef154a115120a0df54b6ded5ddefe9a8e3d6d2b8b099bc3bd1c28517 Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.365431 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.475382 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" 
event={"ID":"ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647","Type":"ContainerDied","Data":"d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a"} Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.475449 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f3d982f9ee881581e688def58986a8c7c62dacb4325c11bdceadbe4b6ffa1a" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.475416 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdhdz" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.477427 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3edc02b8-2451-4edf-a79d-fc86a078de83","Type":"ContainerStarted","Data":"b529fd31ef154a115120a0df54b6ded5ddefe9a8e3d6d2b8b099bc3bd1c28517"} Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.581816 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 16:02:28 crc kubenswrapper[4730]: E0320 16:02:28.587821 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" containerName="nova-cell1-conductor-db-sync" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.587867 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" containerName="nova-cell1-conductor-db-sync" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.588199 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" containerName="nova-cell1-conductor-db-sync" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.588931 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.594047 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.599615 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.623342 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.639870 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.697326 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.697933 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.698380 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjnl\" (UniqueName: \"kubernetes.io/projected/e788dfbe-bc18-46f9-b2bf-674940e1c392-kube-api-access-gsjnl\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.800162 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.800384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjnl\" (UniqueName: \"kubernetes.io/projected/e788dfbe-bc18-46f9-b2bf-674940e1c392-kube-api-access-gsjnl\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.800438 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.804695 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.806043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e788dfbe-bc18-46f9-b2bf-674940e1c392-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.820650 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjnl\" (UniqueName: 
\"kubernetes.io/projected/e788dfbe-bc18-46f9-b2bf-674940e1c392-kube-api-access-gsjnl\") pod \"nova-cell1-conductor-0\" (UID: \"e788dfbe-bc18-46f9-b2bf-674940e1c392\") " pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:28 crc kubenswrapper[4730]: I0320 16:02:28.918583 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.365401 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.509506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e788dfbe-bc18-46f9-b2bf-674940e1c392","Type":"ContainerStarted","Data":"157a6ca1ded6c375598cd97c6882947a4130ebeb518e793e9a94ebfd5f9212a3"} Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.513407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3edc02b8-2451-4edf-a79d-fc86a078de83","Type":"ContainerStarted","Data":"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f"} Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.525886 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.541234 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.541215639 podStartE2EDuration="2.541215639s" podCreationTimestamp="2026-03-20 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:29.531643664 +0000 UTC m=+1408.745015043" watchObservedRunningTime="2026-03-20 16:02:29.541215639 +0000 UTC m=+1408.754587008" Mar 20 16:02:29 crc kubenswrapper[4730]: I0320 16:02:29.548564 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="01c93007-4d31-47a2-810c-819caf917e43" path="/var/lib/kubelet/pods/01c93007-4d31-47a2-810c-819caf917e43/volumes" Mar 20 16:02:29 crc kubenswrapper[4730]: E0320 16:02:29.976065 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:02:29 crc kubenswrapper[4730]: E0320 16:02:29.977491 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:02:29 crc kubenswrapper[4730]: E0320 16:02:29.978479 4730 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 20 16:02:29 crc kubenswrapper[4730]: E0320 16:02:29.978516 4730 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:30 crc kubenswrapper[4730]: I0320 16:02:30.523637 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"e788dfbe-bc18-46f9-b2bf-674940e1c392","Type":"ContainerStarted","Data":"274814869f12b52bd4d369d60b217bb93eba1b6cde19af52b744d15dbc1c07a7"} Mar 20 16:02:30 crc kubenswrapper[4730]: I0320 16:02:30.548100 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.548079832 podStartE2EDuration="2.548079832s" podCreationTimestamp="2026-03-20 16:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:30.537861978 +0000 UTC m=+1409.751233357" watchObservedRunningTime="2026-03-20 16:02:30.548079832 +0000 UTC m=+1409.761451211" Mar 20 16:02:30 crc kubenswrapper[4730]: I0320 16:02:30.768202 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 20 16:02:30 crc kubenswrapper[4730]: I0320 16:02:30.795629 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 20 16:02:31 crc kubenswrapper[4730]: E0320 16:02:31.394492 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice/crio-da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2\": RecentStats: unable to find data in memory cache]" Mar 20 16:02:31 crc kubenswrapper[4730]: I0320 16:02:31.566886 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:31 crc kubenswrapper[4730]: I0320 16:02:31.568966 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 20 16:02:32 crc 
kubenswrapper[4730]: I0320 16:02:32.891633 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 16:02:34 crc kubenswrapper[4730]: I0320 16:02:34.948216 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 16:02:34 crc kubenswrapper[4730]: I0320 16:02:34.948765 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 16:02:35 crc kubenswrapper[4730]: I0320 16:02:35.917162 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:02:35 crc kubenswrapper[4730]: I0320 16:02:35.917207 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:02:35 crc kubenswrapper[4730]: I0320 16:02:35.957911 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:35 crc kubenswrapper[4730]: I0320 16:02:35.966564 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:37 crc kubenswrapper[4730]: I0320 16:02:37.001437 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:37 crc kubenswrapper[4730]: I0320 16:02:37.001534 4730 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 16:02:37 crc kubenswrapper[4730]: I0320 16:02:37.890967 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 16:02:37 crc kubenswrapper[4730]: I0320 16:02:37.939599 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 16:02:38 crc kubenswrapper[4730]: I0320 16:02:38.651469 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 16:02:38 crc kubenswrapper[4730]: I0320 16:02:38.953487 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 20 16:02:41 crc kubenswrapper[4730]: E0320 16:02:41.631162 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice/crio-da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice\": RecentStats: unable to find data in memory cache]" Mar 20 16:02:42 crc kubenswrapper[4730]: I0320 16:02:42.949395 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 16:02:42 crc kubenswrapper[4730]: I0320 16:02:42.950809 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.405771 4730 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518222 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca\") pod \"3f6c808e-d523-48bd-8ec2-28b625834317\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs\") pod \"3f6c808e-d523-48bd-8ec2-28b625834317\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518404 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data\") pod \"3f6c808e-d523-48bd-8ec2-28b625834317\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518477 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle\") pod \"3f6c808e-d523-48bd-8ec2-28b625834317\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518674 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z2tj\" (UniqueName: \"kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj\") pod \"3f6c808e-d523-48bd-8ec2-28b625834317\" (UID: \"3f6c808e-d523-48bd-8ec2-28b625834317\") " Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.518776 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs" (OuterVolumeSpecName: "logs") pod "3f6c808e-d523-48bd-8ec2-28b625834317" (UID: "3f6c808e-d523-48bd-8ec2-28b625834317"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.519207 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6c808e-d523-48bd-8ec2-28b625834317-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.526221 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj" (OuterVolumeSpecName: "kube-api-access-2z2tj") pod "3f6c808e-d523-48bd-8ec2-28b625834317" (UID: "3f6c808e-d523-48bd-8ec2-28b625834317"). InnerVolumeSpecName "kube-api-access-2z2tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.551298 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3f6c808e-d523-48bd-8ec2-28b625834317" (UID: "3f6c808e-d523-48bd-8ec2-28b625834317"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.561967 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f6c808e-d523-48bd-8ec2-28b625834317" (UID: "3f6c808e-d523-48bd-8ec2-28b625834317"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.583981 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data" (OuterVolumeSpecName: "config-data") pod "3f6c808e-d523-48bd-8ec2-28b625834317" (UID: "3f6c808e-d523-48bd-8ec2-28b625834317"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.624137 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z2tj\" (UniqueName: \"kubernetes.io/projected/3f6c808e-d523-48bd-8ec2-28b625834317-kube-api-access-2z2tj\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.624173 4730 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.624185 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.624196 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6c808e-d523-48bd-8ec2-28b625834317-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.680170 4730 generic.go:334] "Generic (PLEG): container finished" podID="3f6c808e-d523-48bd-8ec2-28b625834317" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" exitCode=137 Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.680262 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.680278 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc"} Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.680330 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3f6c808e-d523-48bd-8ec2-28b625834317","Type":"ContainerDied","Data":"d03f482cee6be89afdaaa20261b8838840560db337ea7e22e3e773497fe55805"} Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.680350 4730 scope.go:117] "RemoveContainer" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.699077 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.724587 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.742194 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.747169 4730 scope.go:117] "RemoveContainer" containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" Mar 20 16:02:43 crc kubenswrapper[4730]: E0320 16:02:43.747571 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc\": container with ID starting with ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc not found: ID does not exist" 
containerID="ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.747609 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc"} err="failed to get container status \"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc\": rpc error: code = NotFound desc = could not find container \"ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc\": container with ID starting with ed4f7daeff4a5766b3f240bd0c76702ab4bff0db1848809b07121ff30e3a5bfc not found: ID does not exist" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.747636 4730 scope.go:117] "RemoveContainer" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:02:43 crc kubenswrapper[4730]: E0320 16:02:43.747860 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964\": container with ID starting with ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964 not found: ID does not exist" containerID="ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.747888 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964"} err="failed to get container status \"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964\": rpc error: code = NotFound desc = could not find container \"ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964\": container with ID starting with ca3ac5b513d25322badcc2bf19b245d687c9ccf8bff6c35cf5794c95ec2ab964 not found: ID does not exist" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.752822 4730 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:43 crc kubenswrapper[4730]: E0320 16:02:43.753344 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753368 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: E0320 16:02:43.753380 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753386 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: E0320 16:02:43.753400 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753407 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753608 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753619 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.753638 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 
16:02:43.753648 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.754582 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.756291 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.762308 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.827945 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.828079 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.828136 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.828180 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4adb002b-165b-4e7c-9e26-0a98f30dd467-logs\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.828321 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jchf\" (UniqueName: \"kubernetes.io/projected/4adb002b-165b-4e7c-9e26-0a98f30dd467-kube-api-access-9jchf\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.916985 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.918377 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930400 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jchf\" (UniqueName: \"kubernetes.io/projected/4adb002b-165b-4e7c-9e26-0a98f30dd467-kube-api-access-9jchf\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930480 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930541 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930580 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930609 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4adb002b-165b-4e7c-9e26-0a98f30dd467-logs\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.930998 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4adb002b-165b-4e7c-9e26-0a98f30dd467-logs\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.934841 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.935079 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: 
\"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.935397 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4adb002b-165b-4e7c-9e26-0a98f30dd467-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:43 crc kubenswrapper[4730]: I0320 16:02:43.952519 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jchf\" (UniqueName: \"kubernetes.io/projected/4adb002b-165b-4e7c-9e26-0a98f30dd467-kube-api-access-9jchf\") pod \"watcher-decision-engine-0\" (UID: \"4adb002b-165b-4e7c-9e26-0a98f30dd467\") " pod="openstack/watcher-decision-engine-0" Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.078127 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.671971 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.702763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4adb002b-165b-4e7c-9e26-0a98f30dd467","Type":"ContainerStarted","Data":"d108cf722bb9b9c48b9ba9b8abd0f35e60b17aab1e0179d47e3ead893c3f4026"} Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.807874 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.953955 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.955036 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" 
Mar 20 16:02:44 crc kubenswrapper[4730]: I0320 16:02:44.964726 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.528435 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.544604 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" path="/var/lib/kubelet/pods/3f6c808e-d523-48bd-8ec2-28b625834317/volumes" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.560104 4730 scope.go:117] "RemoveContainer" containerID="2ec54c009b326db4c49da642b8ab1232405aacb430ead248fe894a34dfe7c452" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.572220 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data\") pod \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.572318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle\") pod \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.572525 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmvzz\" (UniqueName: \"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz\") pod \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\" (UID: \"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c\") " Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.602724 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz" (OuterVolumeSpecName: "kube-api-access-zmvzz") pod "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" (UID: "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c"). InnerVolumeSpecName "kube-api-access-zmvzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.619978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" (UID: "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.640624 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data" (OuterVolumeSpecName: "config-data") pod "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" (UID: "1f1bcc8c-7598-4c25-aaa7-0a9636c0729c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.674939 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmvzz\" (UniqueName: \"kubernetes.io/projected/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-kube-api-access-zmvzz\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.674990 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.675002 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.721810 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4adb002b-165b-4e7c-9e26-0a98f30dd467","Type":"ContainerStarted","Data":"37384daa758ca62f06b21e724f38767164da4ab8c9fc86eed747236b08d6fc3b"} Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.723420 4730 generic.go:334] "Generic (PLEG): container finished" podID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" containerID="c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897" exitCode=137 Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.724563 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.725197 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c","Type":"ContainerDied","Data":"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897"} Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.725230 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f1bcc8c-7598-4c25-aaa7-0a9636c0729c","Type":"ContainerDied","Data":"bb467f0f9108cd2d4075fdbdf95b9f10f944692ab74865e527713ebadb5491f5"} Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.725245 4730 scope.go:117] "RemoveContainer" containerID="c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.745294 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.746171 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.7461525780000002 podStartE2EDuration="2.746152578s" podCreationTimestamp="2026-03-20 16:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:45.740626549 +0000 UTC m=+1424.953997928" watchObservedRunningTime="2026-03-20 16:02:45.746152578 +0000 UTC m=+1424.959523947" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.770479 4730 scope.go:117] "RemoveContainer" containerID="c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897" Mar 20 16:02:46 crc kubenswrapper[4730]: E0320 16:02:45.771389 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897\": container with ID starting with c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897 not found: ID does not exist" containerID="c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.771447 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897"} err="failed to get container status \"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897\": rpc error: code = NotFound desc = could not find container \"c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897\": container with ID starting with c51b7b65896e9d86f7a00a27c41da014a980db7c3cc6f6c5953fd8afdde0f897 not found: ID does not exist" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.774313 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.783333 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.793429 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:46 crc kubenswrapper[4730]: E0320 16:02:45.793971 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.793991 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 16:02:46 crc kubenswrapper[4730]: E0320 16:02:45.794033 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 
16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.794041 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.794301 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.794327 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.795175 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.800753 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.800979 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.801384 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.820146 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.880776 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dksk\" (UniqueName: \"kubernetes.io/projected/6586493e-e5d0-4504-b516-ebaac5defd79-kube-api-access-9dksk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.880953 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.880983 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.880999 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.881017 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.923127 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.924978 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.935792 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.983225 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.983302 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.983323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.983351 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.983397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dksk\" (UniqueName: \"kubernetes.io/projected/6586493e-e5d0-4504-b516-ebaac5defd79-kube-api-access-9dksk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc 
kubenswrapper[4730]: I0320 16:02:45.988591 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.988745 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.993210 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:45.993891 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6586493e-e5d0-4504-b516-ebaac5defd79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.002195 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dksk\" (UniqueName: \"kubernetes.io/projected/6586493e-e5d0-4504-b516-ebaac5defd79-kube-api-access-9dksk\") pod \"nova-cell1-novncproxy-0\" (UID: \"6586493e-e5d0-4504-b516-ebaac5defd79\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.135952 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.604909 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.740788 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6586493e-e5d0-4504-b516-ebaac5defd79","Type":"ContainerStarted","Data":"8ba4c5896280523a1ab721d88ca88b134df3056e6f891eadeda5c2cfecb2910c"} Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.749515 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.936457 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:02:46 crc kubenswrapper[4730]: E0320 16:02:46.937163 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.937176 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6c808e-d523-48bd-8ec2-28b625834317" containerName="watcher-decision-engine" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.938547 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:46 crc kubenswrapper[4730]: I0320 16:02:46.968768 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012128 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012170 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012210 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012362 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.012787 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.114852 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.115020 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.116170 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.116373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.116438 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.116464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.116488 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.117155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.117321 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.117744 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.117929 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.135286 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5\") pod \"dnsmasq-dns-7f99bcbd6f-lkgpt\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.294350 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.560101 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1bcc8c-7598-4c25-aaa7-0a9636c0729c" path="/var/lib/kubelet/pods/1f1bcc8c-7598-4c25-aaa7-0a9636c0729c/volumes" Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.753198 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6586493e-e5d0-4504-b516-ebaac5defd79","Type":"ContainerStarted","Data":"87b4a3a716a45fcaa15c2114e60d8ca03a4b2288d8a98246deaad952b89c93a2"} Mar 20 16:02:47 crc kubenswrapper[4730]: I0320 16:02:47.770960 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.770943321 podStartE2EDuration="2.770943321s" podCreationTimestamp="2026-03-20 16:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:47.770498888 +0000 UTC m=+1426.983870257" watchObservedRunningTime="2026-03-20 16:02:47.770943321 +0000 UTC m=+1426.984314690" Mar 20 16:02:48 crc kubenswrapper[4730]: I0320 16:02:48.008990 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:02:48 crc kubenswrapper[4730]: I0320 16:02:48.762385 4730 generic.go:334] "Generic (PLEG): container finished" podID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerID="eb503bcef68144de38c1262fa118798748090de6157da609f58bcca7e2bbdbcc" exitCode=0 Mar 20 16:02:48 crc kubenswrapper[4730]: I0320 16:02:48.763721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" event={"ID":"eee8e670-a743-4284-bde0-5a8a77d8058e","Type":"ContainerDied","Data":"eb503bcef68144de38c1262fa118798748090de6157da609f58bcca7e2bbdbcc"} Mar 20 16:02:48 crc kubenswrapper[4730]: I0320 16:02:48.763786 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" event={"ID":"eee8e670-a743-4284-bde0-5a8a77d8058e","Type":"ContainerStarted","Data":"0440fe1ccefaeb24724ee16cfedb746eead11f6c1b7d84e1056a70f539a4c85b"} Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.625887 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.774388 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-log" containerID="cri-o://42f9304ba2a3640bc2a608f72633df51d75830859b8b512c72f883ee6ebbc449" gracePeriod=30 Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.776370 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" event={"ID":"eee8e670-a743-4284-bde0-5a8a77d8058e","Type":"ContainerStarted","Data":"b9b68041f5e6d75af1b6ac50a20d8efa0a3cf0ab36fd4d2c2cf619950c911910"} Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.776476 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.776976 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-api" containerID="cri-o://b7766e5f23e174441651086dc567c68965531e447de145ebeb0b1414c28ef4e6" gracePeriod=30 Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.810596 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" podStartSLOduration=3.81057767 podStartE2EDuration="3.81057767s" podCreationTimestamp="2026-03-20 16:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 16:02:49.798208094 +0000 UTC m=+1429.011579463" watchObservedRunningTime="2026-03-20 16:02:49.81057767 +0000 UTC m=+1429.023949039" Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.975706 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.976343 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-central-agent" containerID="cri-o://92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125" gracePeriod=30 Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.976385 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="proxy-httpd" containerID="cri-o://2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41" gracePeriod=30 Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.976418 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="sg-core" containerID="cri-o://67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f" gracePeriod=30 Mar 20 16:02:49 crc kubenswrapper[4730]: I0320 16:02:49.976403 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-notification-agent" containerID="cri-o://3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018" gracePeriod=30 Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.789584 4730 generic.go:334] "Generic (PLEG): container finished" podID="c192a384-369a-4011-bbd0-af10cf958010" containerID="2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41" exitCode=0 Mar 20 16:02:50 crc 
kubenswrapper[4730]: I0320 16:02:50.790045 4730 generic.go:334] "Generic (PLEG): container finished" podID="c192a384-369a-4011-bbd0-af10cf958010" containerID="67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f" exitCode=2 Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.790104 4730 generic.go:334] "Generic (PLEG): container finished" podID="c192a384-369a-4011-bbd0-af10cf958010" containerID="92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125" exitCode=0 Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.789695 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerDied","Data":"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41"} Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.790381 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerDied","Data":"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f"} Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.790463 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerDied","Data":"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125"} Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.796732 4730 generic.go:334] "Generic (PLEG): container finished" podID="fc78da88-5699-44ed-af14-7627ea6191f9" containerID="42f9304ba2a3640bc2a608f72633df51d75830859b8b512c72f883ee6ebbc449" exitCode=143 Mar 20 16:02:50 crc kubenswrapper[4730]: I0320 16:02:50.796804 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerDied","Data":"42f9304ba2a3640bc2a608f72633df51d75830859b8b512c72f883ee6ebbc449"} Mar 20 16:02:51 crc kubenswrapper[4730]: I0320 16:02:51.136202 4730 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:51 crc kubenswrapper[4730]: I0320 16:02:51.861725 4730 generic.go:334] "Generic (PLEG): container finished" podID="fc78da88-5699-44ed-af14-7627ea6191f9" containerID="b7766e5f23e174441651086dc567c68965531e447de145ebeb0b1414c28ef4e6" exitCode=0 Mar 20 16:02:51 crc kubenswrapper[4730]: I0320 16:02:51.862005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerDied","Data":"b7766e5f23e174441651086dc567c68965531e447de145ebeb0b1414c28ef4e6"} Mar 20 16:02:51 crc kubenswrapper[4730]: E0320 16:02:51.987371 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279d2368_abe1_465a_9007_68542e5dbfc4.slice/crio-da2c214fcdb33fae608237a0fb1d01559481f0b67a4afa1e6c930298a64b75a2\": RecentStats: unable to find data in memory cache]" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.029900 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.132938 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm4nm\" (UniqueName: \"kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm\") pod \"fc78da88-5699-44ed-af14-7627ea6191f9\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.133076 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data\") pod \"fc78da88-5699-44ed-af14-7627ea6191f9\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.133109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle\") pod \"fc78da88-5699-44ed-af14-7627ea6191f9\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.133187 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs\") pod \"fc78da88-5699-44ed-af14-7627ea6191f9\" (UID: \"fc78da88-5699-44ed-af14-7627ea6191f9\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.135436 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs" (OuterVolumeSpecName: "logs") pod "fc78da88-5699-44ed-af14-7627ea6191f9" (UID: "fc78da88-5699-44ed-af14-7627ea6191f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.145505 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm" (OuterVolumeSpecName: "kube-api-access-gm4nm") pod "fc78da88-5699-44ed-af14-7627ea6191f9" (UID: "fc78da88-5699-44ed-af14-7627ea6191f9"). InnerVolumeSpecName "kube-api-access-gm4nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.179333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data" (OuterVolumeSpecName: "config-data") pod "fc78da88-5699-44ed-af14-7627ea6191f9" (UID: "fc78da88-5699-44ed-af14-7627ea6191f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.184407 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc78da88-5699-44ed-af14-7627ea6191f9" (UID: "fc78da88-5699-44ed-af14-7627ea6191f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.235770 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc78da88-5699-44ed-af14-7627ea6191f9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.235818 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm4nm\" (UniqueName: \"kubernetes.io/projected/fc78da88-5699-44ed-af14-7627ea6191f9-kube-api-access-gm4nm\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.235833 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.235846 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc78da88-5699-44ed-af14-7627ea6191f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.897568 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.908639 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc78da88-5699-44ed-af14-7627ea6191f9","Type":"ContainerDied","Data":"bcfa21becb5d8093cfdbab459412302a96e0a7cd61a8f2880830161d043b5f7d"} Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.908699 4730 scope.go:117] "RemoveContainer" containerID="b7766e5f23e174441651086dc567c68965531e447de145ebeb0b1414c28ef4e6" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.908871 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.936630 4730 generic.go:334] "Generic (PLEG): container finished" podID="c192a384-369a-4011-bbd0-af10cf958010" containerID="3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018" exitCode=0 Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.936674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerDied","Data":"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018"} Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.936703 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c192a384-369a-4011-bbd0-af10cf958010","Type":"ContainerDied","Data":"fc7185d335545d2bc3730a5a21059470bf6bc2bcae0a41d7ca4c2027aa011079"} Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.936862 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984508 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984638 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984678 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984702 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984742 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984786 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.984889 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tkk7\" (UniqueName: \"kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.985481 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.988732 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.990450 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.992409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts" (OuterVolumeSpecName: "scripts") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.995796 4730 scope.go:117] "RemoveContainer" containerID="42f9304ba2a3640bc2a608f72633df51d75830859b8b512c72f883ee6ebbc449" Mar 20 16:02:52 crc kubenswrapper[4730]: I0320 16:02:52.996385 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7" (OuterVolumeSpecName: "kube-api-access-2tkk7") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "kube-api-access-2tkk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.083233 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.091191 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tkk7\" (UniqueName: \"kubernetes.io/projected/c192a384-369a-4011-bbd0-af10cf958010-kube-api-access-2tkk7\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.091226 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c192a384-369a-4011-bbd0-af10cf958010-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.091240 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.094481 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.095674 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.113887 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114370 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-log" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114387 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-log" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114413 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="proxy-httpd" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114421 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="proxy-httpd" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114437 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-notification-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114442 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-notification-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114455 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-api" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114461 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-api" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114470 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-central-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114476 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-central-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.114492 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="sg-core" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114497 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="sg-core" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114662 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-notification-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114675 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-api" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114686 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" containerName="nova-api-log" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114704 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="proxy-httpd" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114714 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="sg-core" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.114721 4730 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c192a384-369a-4011-bbd0-af10cf958010" containerName="ceilometer-central-agent" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.115770 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.121203 4730 scope.go:117] "RemoveContainer" containerID="2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.121664 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.121691 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.123566 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.130983 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.166756 4730 scope.go:117] "RemoveContainer" containerID="67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.172361 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.189104 4730 scope.go:117] "RemoveContainer" containerID="3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.191645 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data" (OuterVolumeSpecName: "config-data") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.192199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") pod \"c192a384-369a-4011-bbd0-af10cf958010\" (UID: \"c192a384-369a-4011-bbd0-af10cf958010\") " Mar 20 16:02:53 crc kubenswrapper[4730]: W0320 16:02:53.192393 4730 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c192a384-369a-4011-bbd0-af10cf958010/volumes/kubernetes.io~secret/config-data Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.192426 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data" (OuterVolumeSpecName: "config-data") pod "c192a384-369a-4011-bbd0-af10cf958010" (UID: "c192a384-369a-4011-bbd0-af10cf958010"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.192860 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.192897 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.192954 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193067 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf45d\" (UniqueName: \"kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193135 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193230 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193455 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193471 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.193480 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c192a384-369a-4011-bbd0-af10cf958010-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.214060 4730 scope.go:117] "RemoveContainer" containerID="92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.233697 4730 scope.go:117] "RemoveContainer" containerID="2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.234366 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41\": container with ID starting with 2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41 not found: ID does not exist" containerID="2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.234399 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41"} err="failed to get container status \"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41\": rpc error: code = NotFound desc = could not find container \"2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41\": container with ID starting with 2e40439ca8128c74c4bde82b12fa3c90a57c5ca9aa2ec703f4bb34d1783c4a41 not found: ID does not exist" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.234420 4730 scope.go:117] "RemoveContainer" containerID="67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.234822 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f\": container with ID starting with 67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f not found: ID does not exist" containerID="67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.234865 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f"} err="failed to get container status \"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f\": rpc error: code = NotFound desc = could not find container \"67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f\": container with ID starting with 67e70a210d20ab3fb6eb013ac35769c758e3dbe555ae4869a1f308e1ce50d12f not found: ID does not exist" Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.234897 4730 scope.go:117] "RemoveContainer" containerID="3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018" Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 
16:02:53.235279 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018\": container with ID starting with 3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018 not found: ID does not exist" containerID="3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.235313 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018"} err="failed to get container status \"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018\": rpc error: code = NotFound desc = could not find container \"3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018\": container with ID starting with 3ed2c729cfba6a845986581f09c8deca053ba44823152224340ca15c6f3b1018 not found: ID does not exist"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.235334 4730 scope.go:117] "RemoveContainer" containerID="92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125"
Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.235629 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125\": container with ID starting with 92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125 not found: ID does not exist" containerID="92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.235671 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125"} err="failed to get container status \"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125\": rpc error: code = NotFound desc = could not find container \"92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125\": container with ID starting with 92ca70d767554dd7f5a3c2261603595e47200f286bd84d37815a7187cc55a125 not found: ID does not exist"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.275174 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.286705 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297453 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297511 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf45d\" (UniqueName: \"kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297538 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297571 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297654 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.297676 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.298880 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.301368 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.301548 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.302485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.302511 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.316751 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.325995 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf45d\" (UniqueName: \"kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d\") pod \"nova-api-0\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.326389 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.332186 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.332386 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.356710 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399518 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399593 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399641 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399706 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399734 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399763 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.399890 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvg5x\" (UniqueName: \"kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.433079 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.491734 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: E0320 16:02:53.492607 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-vvg5x log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="bfa18b95-4067-42bf-82ac-ace10629e4bf"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.501969 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502071 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502097 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvg5x\" (UniqueName: \"kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502319 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502365 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.502899 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.503119 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.505890 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.506356 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.507932 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.511863 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.525952 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvg5x\" (UniqueName: \"kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x\") pod \"ceilometer-0\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") " pod="openstack/ceilometer-0"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.549845 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c192a384-369a-4011-bbd0-af10cf958010" path="/var/lib/kubelet/pods/c192a384-369a-4011-bbd0-af10cf958010/volumes"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.550610 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc78da88-5699-44ed-af14-7627ea6191f9" path="/var/lib/kubelet/pods/fc78da88-5699-44ed-af14-7627ea6191f9/volumes"
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.551207 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.551437 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4938ac0e-1226-4f20-8f23-763b62b863c4" containerName="kube-state-metrics" containerID="cri-o://49b97730a0d8925f4b80deb7db0aeea30943ec9b5135c0345bbaf48837573e5f" gracePeriod=30
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.958046 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 16:02:53 crc kubenswrapper[4730]: W0320 16:02:53.962127 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda875c7b4_22fb_4b91_803c_09a7a439aea1.slice/crio-b02119858ae3b92ae86beccde670787e102f4edc1a93a7bf0ae26e53cf88792e WatchSource:0}: Error finding container b02119858ae3b92ae86beccde670787e102f4edc1a93a7bf0ae26e53cf88792e: Status 404 returned error can't find the container with id b02119858ae3b92ae86beccde670787e102f4edc1a93a7bf0ae26e53cf88792e
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.969697 4730 generic.go:334] "Generic (PLEG): container finished" podID="4938ac0e-1226-4f20-8f23-763b62b863c4" containerID="49b97730a0d8925f4b80deb7db0aeea30943ec9b5135c0345bbaf48837573e5f" exitCode=2
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.970052 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4938ac0e-1226-4f20-8f23-763b62b863c4","Type":"ContainerDied","Data":"49b97730a0d8925f4b80deb7db0aeea30943ec9b5135c0345bbaf48837573e5f"}
Mar 20 16:02:53 crc kubenswrapper[4730]: I0320 16:02:53.978429 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.078581 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.082863 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.095906 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.157594 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224045 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224118 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224150 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224166 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224277 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvg5x\" (UniqueName: \"kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224323 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224340 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6bgz\" (UniqueName: \"kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz\") pod \"4938ac0e-1226-4f20-8f23-763b62b863c4\" (UID: \"4938ac0e-1226-4f20-8f23-763b62b863c4\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.224367 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd\") pod \"bfa18b95-4067-42bf-82ac-ace10629e4bf\" (UID: \"bfa18b95-4067-42bf-82ac-ace10629e4bf\") "
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.226377 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.233654 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x" (OuterVolumeSpecName: "kube-api-access-vvg5x") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "kube-api-access-vvg5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.235677 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data" (OuterVolumeSpecName: "config-data") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.237029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.237686 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.238788 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz" (OuterVolumeSpecName: "kube-api-access-d6bgz") pod "4938ac0e-1226-4f20-8f23-763b62b863c4" (UID: "4938ac0e-1226-4f20-8f23-763b62b863c4"). InnerVolumeSpecName "kube-api-access-d6bgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.239572 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts" (OuterVolumeSpecName: "scripts") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.244866 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfa18b95-4067-42bf-82ac-ace10629e4bf" (UID: "bfa18b95-4067-42bf-82ac-ace10629e4bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326552 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326847 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326862 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326874 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfa18b95-4067-42bf-82ac-ace10629e4bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326884 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvg5x\" (UniqueName: \"kubernetes.io/projected/bfa18b95-4067-42bf-82ac-ace10629e4bf-kube-api-access-vvg5x\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326896 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326906 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6bgz\" (UniqueName: \"kubernetes.io/projected/4938ac0e-1226-4f20-8f23-763b62b863c4-kube-api-access-d6bgz\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.326916 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfa18b95-4067-42bf-82ac-ace10629e4bf-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.989926 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4938ac0e-1226-4f20-8f23-763b62b863c4","Type":"ContainerDied","Data":"7d7a0da71b781876c622a4a6bd339b425ce64e7ebb40d9b345bbc6e39de452eb"}
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.989991 4730 scope.go:117] "RemoveContainer" containerID="49b97730a0d8925f4b80deb7db0aeea30943ec9b5135c0345bbaf48837573e5f"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.989951 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.992838 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.992836 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerStarted","Data":"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4"}
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.993893 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.993926 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerStarted","Data":"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f"}
Mar 20 16:02:54 crc kubenswrapper[4730]: I0320 16:02:54.993943 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerStarted","Data":"b02119858ae3b92ae86beccde670787e102f4edc1a93a7bf0ae26e53cf88792e"}
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.040465 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.040441582 podStartE2EDuration="2.040441582s" podCreationTimestamp="2026-03-20 16:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:55.029547509 +0000 UTC m=+1434.242918878" watchObservedRunningTime="2026-03-20 16:02:55.040441582 +0000 UTC m=+1434.253812951"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.047420 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.108683 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.125315 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.146295 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.162303 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.176680 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: E0320 16:02:55.177161 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4938ac0e-1226-4f20-8f23-763b62b863c4" containerName="kube-state-metrics"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.177177 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938ac0e-1226-4f20-8f23-763b62b863c4" containerName="kube-state-metrics"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.177382 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4938ac0e-1226-4f20-8f23-763b62b863c4" containerName="kube-state-metrics"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.179148 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.181437 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wwqfb"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.183072 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.183531 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.198388 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.226310 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.227641 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.242668 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.242858 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266422 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266515 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266614 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266689 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266708 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.266803 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddg7t\" (UniqueName: \"kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.271721 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368818 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368887 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368927 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368943 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368965 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.368991 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369050 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddg7t\" (UniqueName: \"kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369096 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369150 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8lr\" (UniqueName: \"kubernetes.io/projected/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-api-access-tf8lr\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369195 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369239 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.369768 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0"
Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.370206 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd\") pod \"ceilometer-0\" (UID:
\"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.374922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.375064 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.379802 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.380152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.388186 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddg7t\" (UniqueName: \"kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t\") pod \"ceilometer-0\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.470690 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.470793 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.470839 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8lr\" (UniqueName: \"kubernetes.io/projected/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-api-access-tf8lr\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.470879 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.474714 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.475212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.477076 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.488572 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8lr\" (UniqueName: \"kubernetes.io/projected/2455b53b-7716-45b9-ac24-cd0bd892fbb9-kube-api-access-tf8lr\") pod \"kube-state-metrics-0\" (UID: \"2455b53b-7716-45b9-ac24-cd0bd892fbb9\") " pod="openstack/kube-state-metrics-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.523084 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.545384 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4938ac0e-1226-4f20-8f23-763b62b863c4" path="/var/lib/kubelet/pods/4938ac0e-1226-4f20-8f23-763b62b863c4/volumes" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.546065 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa18b95-4067-42bf-82ac-ace10629e4bf" path="/var/lib/kubelet/pods/bfa18b95-4067-42bf-82ac-ace10629e4bf/volumes" Mar 20 16:02:55 crc kubenswrapper[4730]: I0320 16:02:55.587175 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 16:02:56 crc kubenswrapper[4730]: I0320 16:02:56.028725 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:56 crc kubenswrapper[4730]: I0320 16:02:56.069628 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:02:56 crc kubenswrapper[4730]: I0320 16:02:56.136542 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:56 crc kubenswrapper[4730]: W0320 16:02:56.147464 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2455b53b_7716_45b9_ac24_cd0bd892fbb9.slice/crio-1e53b5bd2b518ed47526b3f05e87f049e3b47ee7d766a73b95963da9b3a0646a WatchSource:0}: Error finding container 1e53b5bd2b518ed47526b3f05e87f049e3b47ee7d766a73b95963da9b3a0646a: Status 404 returned error can't find the container with id 1e53b5bd2b518ed47526b3f05e87f049e3b47ee7d766a73b95963da9b3a0646a Mar 20 16:02:56 crc kubenswrapper[4730]: I0320 16:02:56.151601 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 16:02:56 crc kubenswrapper[4730]: I0320 16:02:56.157544 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.043191 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2455b53b-7716-45b9-ac24-cd0bd892fbb9","Type":"ContainerStarted","Data":"1e53b5bd2b518ed47526b3f05e87f049e3b47ee7d766a73b95963da9b3a0646a"} Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.048662 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerStarted","Data":"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9"} Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.048693 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerStarted","Data":"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a"} Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.048703 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerStarted","Data":"2d283fdd4c1c73728f70c718eab039966732590eed0ec087942b19f09e1c496d"} Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.064454 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.269106 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5j2w4"] Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.270424 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.272761 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.273067 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.299901 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.338342 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j2w4"] Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.414494 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.414720 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-654455944c-qph9q" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="dnsmasq-dns" containerID="cri-o://61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951" gracePeriod=10 Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.433930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.434131 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: 
\"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.434167 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6rml\" (UniqueName: \"kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.434240 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.535591 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.535989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6rml\" (UniqueName: \"kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.536078 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.536173 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.541375 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.541813 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.553166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.564974 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6rml\" (UniqueName: \"kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml\") pod \"nova-cell1-cell-mapping-5j2w4\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " 
pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.666414 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:02:57 crc kubenswrapper[4730]: I0320 16:02:57.959577 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051073 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051111 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051358 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.051475 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnnn\" (UniqueName: \"kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn\") pod \"84937d37-8276-4014-b1ae-bb84547384af\" (UID: \"84937d37-8276-4014-b1ae-bb84547384af\") " Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.072880 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn" (OuterVolumeSpecName: "kube-api-access-fwnnn") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "kube-api-access-fwnnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.082522 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerStarted","Data":"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395"} Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.086868 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2455b53b-7716-45b9-ac24-cd0bd892fbb9","Type":"ContainerStarted","Data":"51e9bde4b97cf7cfa7abc37598cc92c008ea5779caaf5ece80947fdb05a2e0e9"} Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.090272 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.090428 4730 generic.go:334] "Generic (PLEG): container finished" podID="84937d37-8276-4014-b1ae-bb84547384af" 
containerID="61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951" exitCode=0 Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.091041 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654455944c-qph9q" event={"ID":"84937d37-8276-4014-b1ae-bb84547384af","Type":"ContainerDied","Data":"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951"} Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.091076 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-654455944c-qph9q" event={"ID":"84937d37-8276-4014-b1ae-bb84547384af","Type":"ContainerDied","Data":"0a2eb8594326a606d45ae0159cba2be47a452aea3e7bbc81624368815408532b"} Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.091098 4730 scope.go:117] "RemoveContainer" containerID="61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.090859 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-654455944c-qph9q" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.112094 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.559597944 podStartE2EDuration="3.112067705s" podCreationTimestamp="2026-03-20 16:02:55 +0000 UTC" firstStartedPulling="2026-03-20 16:02:56.150370986 +0000 UTC m=+1435.363742355" lastFinishedPulling="2026-03-20 16:02:56.702840747 +0000 UTC m=+1435.916212116" observedRunningTime="2026-03-20 16:02:58.101841891 +0000 UTC m=+1437.315213260" watchObservedRunningTime="2026-03-20 16:02:58.112067705 +0000 UTC m=+1437.325439074" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.133445 4730 scope.go:117] "RemoveContainer" containerID="a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.133955 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.136150 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.149131 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.155360 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.155394 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.155405 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.155416 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnnn\" (UniqueName: \"kubernetes.io/projected/84937d37-8276-4014-b1ae-bb84547384af-kube-api-access-fwnnn\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.186268 4730 scope.go:117] "RemoveContainer" containerID="61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951" Mar 20 16:02:58 crc kubenswrapper[4730]: E0320 16:02:58.186991 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951\": 
container with ID starting with 61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951 not found: ID does not exist" containerID="61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.187042 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951"} err="failed to get container status \"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951\": rpc error: code = NotFound desc = could not find container \"61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951\": container with ID starting with 61e03f987a4e7a22fe8153ac2d6c60c19cb72977de582edcb281b2f18aeef951 not found: ID does not exist" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.187077 4730 scope.go:117] "RemoveContainer" containerID="a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a" Mar 20 16:02:58 crc kubenswrapper[4730]: E0320 16:02:58.187452 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a\": container with ID starting with a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a not found: ID does not exist" containerID="a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.187487 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a"} err="failed to get container status \"a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a\": rpc error: code = NotFound desc = could not find container \"a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a\": container with ID starting with 
a42a5cae6611c2fe3d170915f157931f092b64200d58d7b15c079992b8c3bd9a not found: ID does not exist" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.187743 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config" (OuterVolumeSpecName: "config") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.188260 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84937d37-8276-4014-b1ae-bb84547384af" (UID: "84937d37-8276-4014-b1ae-bb84547384af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.224992 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j2w4"] Mar 20 16:02:58 crc kubenswrapper[4730]: W0320 16:02:58.226815 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6941d556_3020_4344_b185_5d79cf68187c.slice/crio-dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6 WatchSource:0}: Error finding container dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6: Status 404 returned error can't find the container with id dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6 Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.262061 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.262242 4730 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84937d37-8276-4014-b1ae-bb84547384af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.433308 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:58 crc kubenswrapper[4730]: I0320 16:02:58.443739 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-654455944c-qph9q"] Mar 20 16:02:59 crc kubenswrapper[4730]: I0320 16:02:59.108860 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j2w4" event={"ID":"6941d556-3020-4344-b185-5d79cf68187c","Type":"ContainerStarted","Data":"c002961b60e7f958b6eac722566b65c8c9c5ccb02bb7acae14d9879bae50e4f2"} Mar 20 16:02:59 crc kubenswrapper[4730]: I0320 16:02:59.109219 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j2w4" event={"ID":"6941d556-3020-4344-b185-5d79cf68187c","Type":"ContainerStarted","Data":"dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6"} Mar 20 16:02:59 crc kubenswrapper[4730]: I0320 16:02:59.130053 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5j2w4" podStartSLOduration=2.130031286 podStartE2EDuration="2.130031286s" podCreationTimestamp="2026-03-20 16:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:02:59.12944848 +0000 UTC m=+1438.342819859" watchObservedRunningTime="2026-03-20 16:02:59.130031286 +0000 UTC m=+1438.343402655" Mar 20 16:02:59 crc kubenswrapper[4730]: I0320 16:02:59.548458 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84937d37-8276-4014-b1ae-bb84547384af" path="/var/lib/kubelet/pods/84937d37-8276-4014-b1ae-bb84547384af/volumes" Mar 20 16:03:00 crc 
kubenswrapper[4730]: I0320 16:03:00.120650 4730 generic.go:334] "Generic (PLEG): container finished" podID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerID="a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864" exitCode=1 Mar 20 16:03:00 crc kubenswrapper[4730]: I0320 16:03:00.121049 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-notification-agent" containerID="cri-o://60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9" gracePeriod=30 Mar 20 16:03:00 crc kubenswrapper[4730]: I0320 16:03:00.120691 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerDied","Data":"a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864"} Mar 20 16:03:00 crc kubenswrapper[4730]: I0320 16:03:00.120997 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="sg-core" containerID="cri-o://51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395" gracePeriod=30 Mar 20 16:03:00 crc kubenswrapper[4730]: I0320 16:03:00.120837 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-central-agent" containerID="cri-o://7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a" gracePeriod=30 Mar 20 16:03:01 crc kubenswrapper[4730]: I0320 16:03:01.135218 4730 generic.go:334] "Generic (PLEG): container finished" podID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerID="51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395" exitCode=2 Mar 20 16:03:01 crc kubenswrapper[4730]: I0320 16:03:01.135330 4730 generic.go:334] "Generic (PLEG): container finished" 
podID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerID="60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9" exitCode=0 Mar 20 16:03:01 crc kubenswrapper[4730]: I0320 16:03:01.135295 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerDied","Data":"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395"} Mar 20 16:03:01 crc kubenswrapper[4730]: I0320 16:03:01.135364 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerDied","Data":"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9"} Mar 20 16:03:02 crc kubenswrapper[4730]: I0320 16:03:02.938291 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066588 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066641 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066836 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066873 4730 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddg7t\" (UniqueName: \"kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066905 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.066947 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.067016 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle\") pod \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\" (UID: \"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8\") " Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.072015 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.072132 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts" (OuterVolumeSpecName: "scripts") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.072880 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.083445 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t" (OuterVolumeSpecName: "kube-api-access-ddg7t") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "kube-api-access-ddg7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.114565 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.145096 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.159352 4730 generic.go:334] "Generic (PLEG): container finished" podID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerID="7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a" exitCode=0 Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.159403 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerDied","Data":"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a"} Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.159436 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8","Type":"ContainerDied","Data":"2d283fdd4c1c73728f70c718eab039966732590eed0ec087942b19f09e1c496d"} Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.159456 4730 scope.go:117] "RemoveContainer" containerID="a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.159653 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170119 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170156 4730 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170173 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddg7t\" (UniqueName: \"kubernetes.io/projected/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-kube-api-access-ddg7t\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170186 4730 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170198 4730 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.170209 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.182583 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data" (OuterVolumeSpecName: "config-data") pod "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" (UID: "15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.187643 4730 scope.go:117] "RemoveContainer" containerID="51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.206678 4730 scope.go:117] "RemoveContainer" containerID="60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.227702 4730 scope.go:117] "RemoveContainer" containerID="7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.246782 4730 scope.go:117] "RemoveContainer" containerID="a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.247144 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864\": container with ID starting with a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864 not found: ID does not exist" containerID="a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247181 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864"} err="failed to get container status \"a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864\": rpc error: code = NotFound desc = could not find container \"a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864\": container with ID starting with a2770542ad305768cf6cad907c5fcf1dddba97a517e618481d482e8c5742f864 not found: ID does not exist" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247203 4730 scope.go:117] "RemoveContainer" 
containerID="51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.247519 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395\": container with ID starting with 51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395 not found: ID does not exist" containerID="51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247568 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395"} err="failed to get container status \"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395\": rpc error: code = NotFound desc = could not find container \"51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395\": container with ID starting with 51c72e38c5d67c6135d394c4c7da074fc0fe8aedaa4b2e3b893209505e547395 not found: ID does not exist" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247600 4730 scope.go:117] "RemoveContainer" containerID="60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.247858 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9\": container with ID starting with 60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9 not found: ID does not exist" containerID="60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247886 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9"} err="failed to get container status \"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9\": rpc error: code = NotFound desc = could not find container \"60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9\": container with ID starting with 60a8e4234dec296e07fa2e6b25c8cd3d8400e9e7059761ba834d2524bf4e7bc9 not found: ID does not exist" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.247902 4730 scope.go:117] "RemoveContainer" containerID="7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.248070 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a\": container with ID starting with 7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a not found: ID does not exist" containerID="7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.248112 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a"} err="failed to get container status \"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a\": rpc error: code = NotFound desc = could not find container \"7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a\": container with ID starting with 7a8aa45fa61f1cac40b9b89dc3a2959bc5ab43046293be849c2cd09eec70c60a not found: ID does not exist" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.271748 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:03 crc kubenswrapper[4730]: 
I0320 16:03:03.433800 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.433840 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.512873 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.526458 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.573480 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" path="/var/lib/kubelet/pods/15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8/volumes" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.574904 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575653 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="init" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.575676 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="init" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575689 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="dnsmasq-dns" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.575697 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="dnsmasq-dns" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575713 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="sg-core" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 
16:03:03.575721 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="sg-core" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575734 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="proxy-httpd" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.575742 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="proxy-httpd" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575775 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-notification-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.575783 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-notification-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: E0320 16:03:03.575802 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-central-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.575810 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-central-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.576032 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-notification-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.576047 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="ceilometer-central-agent" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.576099 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="proxy-httpd" Mar 
20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.576124 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="84937d37-8276-4014-b1ae-bb84547384af" containerName="dnsmasq-dns" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.576143 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b8ac2f-7010-4b62-a0ed-a5fadb57b0a8" containerName="sg-core" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.578548 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.578656 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.581848 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.581877 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.581848 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679350 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-run-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679493 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95n5w\" (UniqueName: \"kubernetes.io/projected/2333c0f3-d6ce-405f-b8c8-755be42ba74b-kube-api-access-95n5w\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc 
kubenswrapper[4730]: I0320 16:03:03.679558 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-log-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679744 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679768 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-config-data\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679807 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-scripts\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.679851 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.781864 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95n5w\" (UniqueName: \"kubernetes.io/projected/2333c0f3-d6ce-405f-b8c8-755be42ba74b-kube-api-access-95n5w\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782133 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782187 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-log-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782229 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-config-data\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " 
pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782292 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-scripts\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782323 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782347 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-run-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782698 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-run-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.782922 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2333c0f3-d6ce-405f-b8c8-755be42ba74b-log-httpd\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.787406 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.788043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-scripts\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.788757 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.802772 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95n5w\" (UniqueName: \"kubernetes.io/projected/2333c0f3-d6ce-405f-b8c8-755be42ba74b-kube-api-access-95n5w\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.804007 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-config-data\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.811975 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2333c0f3-d6ce-405f-b8c8-755be42ba74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2333c0f3-d6ce-405f-b8c8-755be42ba74b\") " pod="openstack/ceilometer-0" Mar 20 16:03:03 crc kubenswrapper[4730]: I0320 16:03:03.987269 4730 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 16:03:04 crc kubenswrapper[4730]: I0320 16:03:04.186571 4730 generic.go:334] "Generic (PLEG): container finished" podID="6941d556-3020-4344-b185-5d79cf68187c" containerID="c002961b60e7f958b6eac722566b65c8c9c5ccb02bb7acae14d9879bae50e4f2" exitCode=0 Mar 20 16:03:04 crc kubenswrapper[4730]: I0320 16:03:04.186859 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j2w4" event={"ID":"6941d556-3020-4344-b185-5d79cf68187c","Type":"ContainerDied","Data":"c002961b60e7f958b6eac722566b65c8c9c5ccb02bb7acae14d9879bae50e4f2"} Mar 20 16:03:04 crc kubenswrapper[4730]: I0320 16:03:04.450536 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 16:03:04 crc kubenswrapper[4730]: I0320 16:03:04.450570 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 16:03:04 crc kubenswrapper[4730]: I0320 16:03:04.466064 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 16:03:04 crc kubenswrapper[4730]: W0320 16:03:04.468939 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2333c0f3_d6ce_405f_b8c8_755be42ba74b.slice/crio-1299e2d7df514e1a97295b01275fe6d255ee41e4697438fd7f4e6ddef4421c83 WatchSource:0}: Error finding container 1299e2d7df514e1a97295b01275fe6d255ee41e4697438fd7f4e6ddef4421c83: Status 404 returned error can't find 
the container with id 1299e2d7df514e1a97295b01275fe6d255ee41e4697438fd7f4e6ddef4421c83 Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.198410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2333c0f3-d6ce-405f-b8c8-755be42ba74b","Type":"ContainerStarted","Data":"96c436e4c4ad2205424a867b0300ce8ea915a1f696fca311e0994be16783f1d9"} Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.198518 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2333c0f3-d6ce-405f-b8c8-755be42ba74b","Type":"ContainerStarted","Data":"05214ba2af35aa37b13a0a689fa73f69aedb3ffa941dcd2d56500efb0971fd16"} Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.198540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2333c0f3-d6ce-405f-b8c8-755be42ba74b","Type":"ContainerStarted","Data":"1299e2d7df514e1a97295b01275fe6d255ee41e4697438fd7f4e6ddef4421c83"} Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.522321 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.606850 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.618095 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6rml\" (UniqueName: \"kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml\") pod \"6941d556-3020-4344-b185-5d79cf68187c\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.618171 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts\") pod \"6941d556-3020-4344-b185-5d79cf68187c\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.618199 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data\") pod \"6941d556-3020-4344-b185-5d79cf68187c\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.618338 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle\") pod \"6941d556-3020-4344-b185-5d79cf68187c\" (UID: \"6941d556-3020-4344-b185-5d79cf68187c\") " Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.642035 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts" (OuterVolumeSpecName: "scripts") pod "6941d556-3020-4344-b185-5d79cf68187c" (UID: "6941d556-3020-4344-b185-5d79cf68187c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.645820 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml" (OuterVolumeSpecName: "kube-api-access-p6rml") pod "6941d556-3020-4344-b185-5d79cf68187c" (UID: "6941d556-3020-4344-b185-5d79cf68187c"). InnerVolumeSpecName "kube-api-access-p6rml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.661531 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data" (OuterVolumeSpecName: "config-data") pod "6941d556-3020-4344-b185-5d79cf68187c" (UID: "6941d556-3020-4344-b185-5d79cf68187c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.680926 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6941d556-3020-4344-b185-5d79cf68187c" (UID: "6941d556-3020-4344-b185-5d79cf68187c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.721040 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6rml\" (UniqueName: \"kubernetes.io/projected/6941d556-3020-4344-b185-5d79cf68187c-kube-api-access-p6rml\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.721071 4730 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.721081 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:05 crc kubenswrapper[4730]: I0320 16:03:05.721090 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6941d556-3020-4344-b185-5d79cf68187c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.211061 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j2w4" event={"ID":"6941d556-3020-4344-b185-5d79cf68187c","Type":"ContainerDied","Data":"dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6"} Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.211428 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc712c046f53bbab42da7c72012f52d748e148acd1b54785e8a3397492fd3b6" Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.211176 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j2w4" Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.393194 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.393625 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-log" containerID="cri-o://5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f" gracePeriod=30 Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.394131 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-api" containerID="cri-o://8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4" gracePeriod=30 Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.405953 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.408058 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3edc02b8-2451-4edf-a79d-fc86a078de83" containerName="nova-scheduler-scheduler" containerID="cri-o://f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f" gracePeriod=30 Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.427225 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.427738 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-log" containerID="cri-o://a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9" gracePeriod=30 Mar 20 16:03:06 crc kubenswrapper[4730]: I0320 16:03:06.427732 4730 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-metadata" containerID="cri-o://4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188" gracePeriod=30 Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.233063 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2333c0f3-d6ce-405f-b8c8-755be42ba74b","Type":"ContainerStarted","Data":"6346e0485e2a6b37dd5fec8edf65ac491ec6f80b862b5bb6f83d9906e4e8fe7e"} Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.239177 4730 generic.go:334] "Generic (PLEG): container finished" podID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerID="a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9" exitCode=143 Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.239258 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerDied","Data":"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9"} Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.242457 4730 generic.go:334] "Generic (PLEG): container finished" podID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerID="5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f" exitCode=143 Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.242503 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerDied","Data":"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f"} Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.803701 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.809500 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.970952 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data\") pod \"3edc02b8-2451-4edf-a79d-fc86a078de83\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971106 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle\") pod \"3edc02b8-2451-4edf-a79d-fc86a078de83\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971156 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data\") pod \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971327 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs\") pod \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971369 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xws9p\" (UniqueName: \"kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p\") pod \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971401 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle\") pod \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971435 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs\") pod \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\" (UID: \"c9bfb6c0-4971-4a58-aacc-17636a95b8a4\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.971453 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrnlf\" (UniqueName: \"kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf\") pod \"3edc02b8-2451-4edf-a79d-fc86a078de83\" (UID: \"3edc02b8-2451-4edf-a79d-fc86a078de83\") " Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.972945 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs" (OuterVolumeSpecName: "logs") pod "c9bfb6c0-4971-4a58-aacc-17636a95b8a4" (UID: "c9bfb6c0-4971-4a58-aacc-17636a95b8a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.978514 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf" (OuterVolumeSpecName: "kube-api-access-qrnlf") pod "3edc02b8-2451-4edf-a79d-fc86a078de83" (UID: "3edc02b8-2451-4edf-a79d-fc86a078de83"). InnerVolumeSpecName "kube-api-access-qrnlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:03:07 crc kubenswrapper[4730]: I0320 16:03:07.980723 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p" (OuterVolumeSpecName: "kube-api-access-xws9p") pod "c9bfb6c0-4971-4a58-aacc-17636a95b8a4" (UID: "c9bfb6c0-4971-4a58-aacc-17636a95b8a4"). InnerVolumeSpecName "kube-api-access-xws9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.012194 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9bfb6c0-4971-4a58-aacc-17636a95b8a4" (UID: "c9bfb6c0-4971-4a58-aacc-17636a95b8a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.012228 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3edc02b8-2451-4edf-a79d-fc86a078de83" (UID: "3edc02b8-2451-4edf-a79d-fc86a078de83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.016199 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data" (OuterVolumeSpecName: "config-data") pod "c9bfb6c0-4971-4a58-aacc-17636a95b8a4" (UID: "c9bfb6c0-4971-4a58-aacc-17636a95b8a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.016190 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data" (OuterVolumeSpecName: "config-data") pod "3edc02b8-2451-4edf-a79d-fc86a078de83" (UID: "3edc02b8-2451-4edf-a79d-fc86a078de83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.037486 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c9bfb6c0-4971-4a58-aacc-17636a95b8a4" (UID: "c9bfb6c0-4971-4a58-aacc-17636a95b8a4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073428 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073469 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073479 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073489 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xws9p\" (UniqueName: \"kubernetes.io/projected/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-kube-api-access-xws9p\") on node \"crc\" DevicePath 
\"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073500 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073511 4730 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bfb6c0-4971-4a58-aacc-17636a95b8a4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073521 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrnlf\" (UniqueName: \"kubernetes.io/projected/3edc02b8-2451-4edf-a79d-fc86a078de83-kube-api-access-qrnlf\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.073529 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edc02b8-2451-4edf-a79d-fc86a078de83-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.256220 4730 generic.go:334] "Generic (PLEG): container finished" podID="3edc02b8-2451-4edf-a79d-fc86a078de83" containerID="f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f" exitCode=0 Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.256283 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3edc02b8-2451-4edf-a79d-fc86a078de83","Type":"ContainerDied","Data":"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f"} Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.256328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3edc02b8-2451-4edf-a79d-fc86a078de83","Type":"ContainerDied","Data":"b529fd31ef154a115120a0df54b6ded5ddefe9a8e3d6d2b8b099bc3bd1c28517"} Mar 20 16:03:08 crc kubenswrapper[4730]: 
I0320 16:03:08.256351 4730 scope.go:117] "RemoveContainer" containerID="f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.257495 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.259178 4730 generic.go:334] "Generic (PLEG): container finished" podID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerID="4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188" exitCode=0 Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.259210 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerDied","Data":"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188"} Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.259234 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9bfb6c0-4971-4a58-aacc-17636a95b8a4","Type":"ContainerDied","Data":"26008e0cde0b10d5cf635c86bad21a3a5eba1afc3f588c8674fe357301fcf0ca"} Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.259295 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.282346 4730 scope.go:117] "RemoveContainer" containerID="f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.285856 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f\": container with ID starting with f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f not found: ID does not exist" containerID="f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.285942 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f"} err="failed to get container status \"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f\": rpc error: code = NotFound desc = could not find container \"f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f\": container with ID starting with f01bb1ce3fdb271555994504b12501ddebf5c6f44d1ae92e8481bfe809a7796f not found: ID does not exist" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.286021 4730 scope.go:117] "RemoveContainer" containerID="4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.296170 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.309035 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.326425 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.338425 4730 
scope.go:117] "RemoveContainer" containerID="a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.342797 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.359781 4730 scope.go:117] "RemoveContainer" containerID="4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.359881 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.360282 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edc02b8-2451-4edf-a79d-fc86a078de83" containerName="nova-scheduler-scheduler" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360298 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edc02b8-2451-4edf-a79d-fc86a078de83" containerName="nova-scheduler-scheduler" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.360307 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-log" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360313 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-log" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.360326 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-metadata" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360332 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-metadata" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.360345 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6941d556-3020-4344-b185-5d79cf68187c" 
containerName="nova-manage" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360352 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6941d556-3020-4344-b185-5d79cf68187c" containerName="nova-manage" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360538 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-metadata" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360549 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6941d556-3020-4344-b185-5d79cf68187c" containerName="nova-manage" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360572 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" containerName="nova-metadata-log" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.360580 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edc02b8-2451-4edf-a79d-fc86a078de83" containerName="nova-scheduler-scheduler" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.361584 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.363696 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.363944 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.369557 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188\": container with ID starting with 4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188 not found: ID does not exist" containerID="4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.369614 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188"} err="failed to get container status \"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188\": rpc error: code = NotFound desc = could not find container \"4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188\": container with ID starting with 4ea91441433de2c9d6a90ef65cf2c1e62d0cba102e976dcb6c1e329ad4173188 not found: ID does not exist" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.369647 4730 scope.go:117] "RemoveContainer" containerID="a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9" Mar 20 16:03:08 crc kubenswrapper[4730]: E0320 16:03:08.370368 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9\": container with ID starting with a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9 not 
found: ID does not exist" containerID="a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.370426 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9"} err="failed to get container status \"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9\": rpc error: code = NotFound desc = could not find container \"a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9\": container with ID starting with a81bd5c7f3aa448b067be4239c5895b4e08b8ed1cbb446ef78371ac5389a73a9 not found: ID does not exist" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.379861 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.389229 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.393682 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.412700 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.426472 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.490624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58f453e-84d8-47b1-8740-406f92c4ca79-logs\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.490743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-config-data\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.490767 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.490784 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-config-data\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.490956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.491004 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrsq\" (UniqueName: \"kubernetes.io/projected/a58f453e-84d8-47b1-8740-406f92c4ca79-kube-api-access-vlrsq\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.491101 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.491307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqgd\" (UniqueName: \"kubernetes.io/projected/4deff063-ecb8-4cf2-8e94-45ab62a613bc-kube-api-access-5kqgd\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-config-data\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593546 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-config-data\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593600 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593625 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrsq\" (UniqueName: \"kubernetes.io/projected/a58f453e-84d8-47b1-8740-406f92c4ca79-kube-api-access-vlrsq\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593674 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593746 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqgd\" (UniqueName: \"kubernetes.io/projected/4deff063-ecb8-4cf2-8e94-45ab62a613bc-kube-api-access-5kqgd\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.593849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58f453e-84d8-47b1-8740-406f92c4ca79-logs\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.594504 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58f453e-84d8-47b1-8740-406f92c4ca79-logs\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.597592 4730 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-config-data\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.598736 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-config-data\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.599070 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.600820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4deff063-ecb8-4cf2-8e94-45ab62a613bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.601740 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a58f453e-84d8-47b1-8740-406f92c4ca79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.617927 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrsq\" (UniqueName: \"kubernetes.io/projected/a58f453e-84d8-47b1-8740-406f92c4ca79-kube-api-access-vlrsq\") pod \"nova-metadata-0\" (UID: 
\"a58f453e-84d8-47b1-8740-406f92c4ca79\") " pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.618037 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqgd\" (UniqueName: \"kubernetes.io/projected/4deff063-ecb8-4cf2-8e94-45ab62a613bc-kube-api-access-5kqgd\") pod \"nova-scheduler-0\" (UID: \"4deff063-ecb8-4cf2-8e94-45ab62a613bc\") " pod="openstack/nova-scheduler-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.694023 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 16:03:08 crc kubenswrapper[4730]: I0320 16:03:08.714155 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.178869 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 16:03:09 crc kubenswrapper[4730]: W0320 16:03:09.188443 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda58f453e_84d8_47b1_8740_406f92c4ca79.slice/crio-8c4113a3a395f61d82a4e3f3327cc72aaec3ee3b7ff952100b967b5e75bad9d9 WatchSource:0}: Error finding container 8c4113a3a395f61d82a4e3f3327cc72aaec3ee3b7ff952100b967b5e75bad9d9: Status 404 returned error can't find the container with id 8c4113a3a395f61d82a4e3f3327cc72aaec3ee3b7ff952100b967b5e75bad9d9 Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.259321 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.273890 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58f453e-84d8-47b1-8740-406f92c4ca79","Type":"ContainerStarted","Data":"8c4113a3a395f61d82a4e3f3327cc72aaec3ee3b7ff952100b967b5e75bad9d9"} Mar 20 16:03:09 crc kubenswrapper[4730]: W0320 16:03:09.279704 
4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4deff063_ecb8_4cf2_8e94_45ab62a613bc.slice/crio-21158f74c1175b4725450a9060010e126ee9a4812de984c4772223cfd04c4b44 WatchSource:0}: Error finding container 21158f74c1175b4725450a9060010e126ee9a4812de984c4772223cfd04c4b44: Status 404 returned error can't find the container with id 21158f74c1175b4725450a9060010e126ee9a4812de984c4772223cfd04c4b44 Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.552308 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edc02b8-2451-4edf-a79d-fc86a078de83" path="/var/lib/kubelet/pods/3edc02b8-2451-4edf-a79d-fc86a078de83/volumes" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.553175 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bfb6c0-4971-4a58-aacc-17636a95b8a4" path="/var/lib/kubelet/pods/c9bfb6c0-4971-4a58-aacc-17636a95b8a4/volumes" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.730092 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826551 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826624 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826665 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826699 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826773 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf45d\" (UniqueName: \"kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.826884 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs\") pod \"a875c7b4-22fb-4b91-803c-09a7a439aea1\" (UID: \"a875c7b4-22fb-4b91-803c-09a7a439aea1\") " Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.827333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs" (OuterVolumeSpecName: "logs") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.827660 4730 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a875c7b4-22fb-4b91-803c-09a7a439aea1-logs\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.833623 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d" (OuterVolumeSpecName: "kube-api-access-cf45d") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "kube-api-access-cf45d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.862197 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.884383 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data" (OuterVolumeSpecName: "config-data") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.886331 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.919376 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a875c7b4-22fb-4b91-803c-09a7a439aea1" (UID: "a875c7b4-22fb-4b91-803c-09a7a439aea1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.930366 4730 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.930389 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.930398 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.930407 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf45d\" (UniqueName: \"kubernetes.io/projected/a875c7b4-22fb-4b91-803c-09a7a439aea1-kube-api-access-cf45d\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:09 crc kubenswrapper[4730]: I0320 16:03:09.930434 4730 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a875c7b4-22fb-4b91-803c-09a7a439aea1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.296011 4730 generic.go:334] "Generic (PLEG): container finished" podID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerID="8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4" exitCode=0 Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.296095 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.296118 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerDied","Data":"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.299416 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a875c7b4-22fb-4b91-803c-09a7a439aea1","Type":"ContainerDied","Data":"b02119858ae3b92ae86beccde670787e102f4edc1a93a7bf0ae26e53cf88792e"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.299441 4730 scope.go:117] "RemoveContainer" containerID="8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.305879 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58f453e-84d8-47b1-8740-406f92c4ca79","Type":"ContainerStarted","Data":"28085b89ffc4ce452ccb90eb0e4869438568347b071b197104cf4fffebe470c3"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.305925 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58f453e-84d8-47b1-8740-406f92c4ca79","Type":"ContainerStarted","Data":"5b95a18c7e8212d8f5c84cac299872f1049c51407e0f916a9069755e43ecc51f"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.309023 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4deff063-ecb8-4cf2-8e94-45ab62a613bc","Type":"ContainerStarted","Data":"d2a3b9bce5ad193f1702a85ebe0d22aa011263d2d3114ec660ae0fb7d74ed20b"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.309064 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"4deff063-ecb8-4cf2-8e94-45ab62a613bc","Type":"ContainerStarted","Data":"21158f74c1175b4725450a9060010e126ee9a4812de984c4772223cfd04c4b44"} Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.340984 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.340962723 podStartE2EDuration="2.340962723s" podCreationTimestamp="2026-03-20 16:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:03:10.329887704 +0000 UTC m=+1449.543259153" watchObservedRunningTime="2026-03-20 16:03:10.340962723 +0000 UTC m=+1449.554334092" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.343462 4730 scope.go:117] "RemoveContainer" containerID="5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.359852 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.384623 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.390731 4730 scope.go:117] "RemoveContainer" containerID="8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4" Mar 20 16:03:10 crc kubenswrapper[4730]: E0320 16:03:10.391191 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4\": container with ID starting with 8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4 not found: ID does not exist" containerID="8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.391319 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4"} err="failed to get container status \"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4\": rpc error: code = NotFound desc = could not find container \"8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4\": container with ID starting with 8ad5725e9930431c50bd1862d3f763c6380098c554ce816d54c8f41bdc9481c4 not found: ID does not exist" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.391351 4730 scope.go:117] "RemoveContainer" containerID="5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f" Mar 20 16:03:10 crc kubenswrapper[4730]: E0320 16:03:10.392653 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f\": container with ID starting with 5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f not found: ID does not exist" containerID="5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.392676 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f"} err="failed to get container status \"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f\": rpc error: code = NotFound desc = could not find container \"5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f\": container with ID starting with 5b5163f015343a503b8bb1265422ad47154d41b97a20358ae2eebcfb9b02968f not found: ID does not exist" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.397105 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 16:03:10 crc kubenswrapper[4730]: E0320 16:03:10.397514 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-api" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.397532 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-api" Mar 20 16:03:10 crc kubenswrapper[4730]: E0320 16:03:10.397564 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-log" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.397571 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-log" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.397773 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-api" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.397806 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" containerName="nova-api-log" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.398827 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.403077 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.404639 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.404835 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.410517 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.410494271 podStartE2EDuration="2.410494271s" podCreationTimestamp="2026-03-20 16:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:03:10.377427391 +0000 UTC m=+1449.590798770" watchObservedRunningTime="2026-03-20 16:03:10.410494271 +0000 UTC m=+1449.623865650" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.434288 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.543742 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.543963 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " 
pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.544111 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2380321d-63e0-40a2-8ca4-5780cba46259-logs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.544280 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrdt\" (UniqueName: \"kubernetes.io/projected/2380321d-63e0-40a2-8ca4-5780cba46259-kube-api-access-trrdt\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.544400 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-public-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.544480 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-config-data\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.646123 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-public-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.646428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-config-data\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.646701 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.646919 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.647428 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2380321d-63e0-40a2-8ca4-5780cba46259-logs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.647770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrdt\" (UniqueName: \"kubernetes.io/projected/2380321d-63e0-40a2-8ca4-5780cba46259-kube-api-access-trrdt\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.648808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2380321d-63e0-40a2-8ca4-5780cba46259-logs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0" Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 
16:03:10.649924 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-public-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0"
Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.652775 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0"
Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.654402 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0"
Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.656233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2380321d-63e0-40a2-8ca4-5780cba46259-config-data\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0"
Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.668043 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrdt\" (UniqueName: \"kubernetes.io/projected/2380321d-63e0-40a2-8ca4-5780cba46259-kube-api-access-trrdt\") pod \"nova-api-0\" (UID: \"2380321d-63e0-40a2-8ca4-5780cba46259\") " pod="openstack/nova-api-0"
Mar 20 16:03:10 crc kubenswrapper[4730]: I0320 16:03:10.725706 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.204821 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 16:03:11 crc kubenswrapper[4730]: W0320 16:03:11.206108 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2380321d_63e0_40a2_8ca4_5780cba46259.slice/crio-cdfcbf8df97ae3fb78b268f7faedaffc2024b946380448f550e841b6a860cf3e WatchSource:0}: Error finding container cdfcbf8df97ae3fb78b268f7faedaffc2024b946380448f550e841b6a860cf3e: Status 404 returned error can't find the container with id cdfcbf8df97ae3fb78b268f7faedaffc2024b946380448f550e841b6a860cf3e
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.329862 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2333c0f3-d6ce-405f-b8c8-755be42ba74b","Type":"ContainerStarted","Data":"beb9c38840f81abf1794ad7a37e5012911824dfd945cddb58ee8a2a4d957578d"}
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.330340 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.332910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2380321d-63e0-40a2-8ca4-5780cba46259","Type":"ContainerStarted","Data":"cdfcbf8df97ae3fb78b268f7faedaffc2024b946380448f550e841b6a860cf3e"}
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.365831 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.318650157 podStartE2EDuration="8.365813702s" podCreationTimestamp="2026-03-20 16:03:03 +0000 UTC" firstStartedPulling="2026-03-20 16:03:04.471970169 +0000 UTC m=+1443.685341528" lastFinishedPulling="2026-03-20 16:03:10.519133714 +0000 UTC m=+1449.732505073" observedRunningTime="2026-03-20 16:03:11.357882214 +0000 UTC m=+1450.571253583" watchObservedRunningTime="2026-03-20 16:03:11.365813702 +0000 UTC m=+1450.579185071"
Mar 20 16:03:11 crc kubenswrapper[4730]: I0320 16:03:11.559559 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a875c7b4-22fb-4b91-803c-09a7a439aea1" path="/var/lib/kubelet/pods/a875c7b4-22fb-4b91-803c-09a7a439aea1/volumes"
Mar 20 16:03:12 crc kubenswrapper[4730]: I0320 16:03:12.343777 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2380321d-63e0-40a2-8ca4-5780cba46259","Type":"ContainerStarted","Data":"ba5e35851950f100dad51028dcbb21bbfef539fc12fbfe9a41f99f5a8cf0301b"}
Mar 20 16:03:12 crc kubenswrapper[4730]: I0320 16:03:12.344130 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2380321d-63e0-40a2-8ca4-5780cba46259","Type":"ContainerStarted","Data":"ffc44d330d32f88758a8bfabb235a3b27c595d5f1479c1e6ef4eaf45b82a0cd6"}
Mar 20 16:03:12 crc kubenswrapper[4730]: I0320 16:03:12.373729 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.373709084 podStartE2EDuration="2.373709084s" podCreationTimestamp="2026-03-20 16:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:03:12.364059407 +0000 UTC m=+1451.577430786" watchObservedRunningTime="2026-03-20 16:03:12.373709084 +0000 UTC m=+1451.587080443"
Mar 20 16:03:13 crc kubenswrapper[4730]: I0320 16:03:13.714752 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 20 16:03:18 crc kubenswrapper[4730]: I0320 16:03:18.695093 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 20 16:03:18 crc kubenswrapper[4730]: I0320 16:03:18.695622 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 20 16:03:18 crc kubenswrapper[4730]: I0320 16:03:18.714414 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 20 16:03:18 crc kubenswrapper[4730]: I0320 16:03:18.740677 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 20 16:03:19 crc kubenswrapper[4730]: I0320 16:03:19.474615 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 20 16:03:19 crc kubenswrapper[4730]: I0320 16:03:19.709365 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a58f453e-84d8-47b1-8740-406f92c4ca79" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 16:03:19 crc kubenswrapper[4730]: I0320 16:03:19.709411 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a58f453e-84d8-47b1-8740-406f92c4ca79" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 16:03:20 crc kubenswrapper[4730]: I0320 16:03:20.726655 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 20 16:03:20 crc kubenswrapper[4730]: I0320 16:03:20.726999 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 20 16:03:21 crc kubenswrapper[4730]: I0320 16:03:21.740424 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2380321d-63e0-40a2-8ca4-5780cba46259" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 16:03:21 crc kubenswrapper[4730]: I0320 16:03:21.740470 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2380321d-63e0-40a2-8ca4-5780cba46259" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 16:03:26 crc kubenswrapper[4730]: I0320 16:03:26.695169 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 20 16:03:26 crc kubenswrapper[4730]: I0320 16:03:26.695929 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 20 16:03:28 crc kubenswrapper[4730]: I0320 16:03:28.700194 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 20 16:03:28 crc kubenswrapper[4730]: I0320 16:03:28.702552 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 20 16:03:28 crc kubenswrapper[4730]: I0320 16:03:28.706194 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 20 16:03:28 crc kubenswrapper[4730]: I0320 16:03:28.725892 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 20 16:03:28 crc kubenswrapper[4730]: I0320 16:03:28.726870 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 20 16:03:29 crc kubenswrapper[4730]: I0320 16:03:29.554001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 20 16:03:30 crc kubenswrapper[4730]: I0320 16:03:30.736654 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 20 16:03:30 crc kubenswrapper[4730]: I0320 16:03:30.737180 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 20 16:03:30 crc kubenswrapper[4730]: I0320 16:03:30.748431 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 20 16:03:31 crc kubenswrapper[4730]: I0320 16:03:31.646267 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 20 16:03:34 crc kubenswrapper[4730]: I0320 16:03:34.003073 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 20 16:03:43 crc kubenswrapper[4730]: I0320 16:03:43.762430 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 16:03:44 crc kubenswrapper[4730]: I0320 16:03:44.673490 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 16:03:47 crc kubenswrapper[4730]: I0320 16:03:47.128146 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="rabbitmq" containerID="cri-o://3a783d296547ab247634b62ed131b57fa9392453e5aadc95036d56c15ea1686f" gracePeriod=604797
Mar 20 16:03:47 crc kubenswrapper[4730]: I0320 16:03:47.825212 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="rabbitmq" containerID="cri-o://f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b" gracePeriod=604797
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.752450 4730 generic.go:334] "Generic (PLEG): container finished" podID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerID="3a783d296547ab247634b62ed131b57fa9392453e5aadc95036d56c15ea1686f" exitCode=0
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.752547 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerDied","Data":"3a783d296547ab247634b62ed131b57fa9392453e5aadc95036d56c15ea1686f"}
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.752870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3","Type":"ContainerDied","Data":"ddda948591892d2a138c1c22bc7cf5e93ad382615cc9ff618b810cf784bacaf9"}
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.752903 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddda948591892d2a138c1c22bc7cf5e93ad382615cc9ff618b810cf784bacaf9"
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.807567 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.857838 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.857918 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.857981 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858005 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858027 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858066 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858127 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858153 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858200 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858233 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.858269 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4jnf\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf\") pod \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\" (UID: \"dfd9111c-a9f4-4874-91fc-c0ef68ae09a3\") "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.860456 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.861614 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.861668 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.866733 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf" (OuterVolumeSpecName: "kube-api-access-c4jnf") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "kube-api-access-c4jnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.867061 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.868366 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info" (OuterVolumeSpecName: "pod-info") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.883236 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.892501 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.898911 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data" (OuterVolumeSpecName: "config-data") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960853 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960885 4730 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-pod-info\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960895 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960904 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960914 4730 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960925 4730 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960933 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960942 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4jnf\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-kube-api-access-c4jnf\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.960950 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.973472 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf" (OuterVolumeSpecName: "server-conf") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:48 crc kubenswrapper[4730]: I0320 16:03:48.982963 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.062726 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.063042 4730 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-server-conf\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.065930 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" (UID: "dfd9111c-a9f4-4874-91fc-c0ef68ae09a3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.164902 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.337719 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499020 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499185 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499203 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499291 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499339 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499363 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499380 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499423 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vth2k\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499448 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data\") pod \"8043f69c-832c-4afa-a9b9-211507664805\" (UID: \"8043f69c-832c-4afa-a9b9-211507664805\") "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499504 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.499853 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.500070 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.500212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.503978 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k" (OuterVolumeSpecName: "kube-api-access-vth2k") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "kube-api-access-vth2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.505718 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.509215 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.511559 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.520600 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info" (OuterVolumeSpecName: "pod-info") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.584441 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data" (OuterVolumeSpecName: "config-data") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.588821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf" (OuterVolumeSpecName: "server-conf") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601818 4730 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8043f69c-832c-4afa-a9b9-211507664805-pod-info\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601861 4730 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601880 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601891 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601902 4730 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8043f69c-832c-4afa-a9b9-211507664805-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601913 4730 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-server-conf\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601937 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601952 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vth2k\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-kube-api-access-vth2k\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.601963 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8043f69c-832c-4afa-a9b9-211507664805-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.634935 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.681012 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8043f69c-832c-4afa-a9b9-211507664805" (UID: "8043f69c-832c-4afa-a9b9-211507664805"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.703503 4730 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8043f69c-832c-4afa-a9b9-211507664805-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.703537 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764072 4730 generic.go:334] "Generic (PLEG): container finished" podID="8043f69c-832c-4afa-a9b9-211507664805" containerID="f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b" exitCode=0
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764127 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764140 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerDied","Data":"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"}
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764199 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8043f69c-832c-4afa-a9b9-211507664805","Type":"ContainerDied","Data":"8412f9f53f9d92b76fea1c48228bf1b9d922d18161acc37ea8504ee75f4ce219"}
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764231 4730 scope.go:117] "RemoveContainer" containerID="f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.764155 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.785576 4730 scope.go:117] "RemoveContainer" containerID="5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.813502 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.827777 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.850004 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.854767 4730 scope.go:117] "RemoveContainer" containerID="f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"
Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.855504 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b\": container with ID starting with f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b not found: ID does not exist" containerID="f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.855555 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b"} err="failed to get container status \"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b\": rpc error: code = NotFound desc = could not find container \"f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b\": container with ID starting with f4ff5614730f4bee870729b9dbea193a82cb2fbf2b64a4650757a12a3469fc3b not found: ID does not exist"
Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 
16:03:49.855587 4730 scope.go:117] "RemoveContainer" containerID="5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5" Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.856163 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5\": container with ID starting with 5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5 not found: ID does not exist" containerID="5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.856188 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5"} err="failed to get container status \"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5\": rpc error: code = NotFound desc = could not find container \"5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5\": container with ID starting with 5873082b81a5b9253ac47bf2bf3866502e40b3ccab836a111c3bd8134e015ee5 not found: ID does not exist" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.860885 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.873197 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.873717 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.873746 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.873784 4730 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="setup-container" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.873794 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="setup-container" Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.873815 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.873823 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: E0320 16:03:49.873843 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="setup-container" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.873867 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="setup-container" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.874103 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.874142 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8043f69c-832c-4afa-a9b9-211507664805" containerName="rabbitmq" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.875480 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.887868 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.887879 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888037 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888227 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888453 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888555 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888649 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.888813 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rwcvj" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.908651 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.917898 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.924341 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.937809 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.941997 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.942441 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dlhcb" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.942587 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.942636 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.942640 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 20 16:03:49 crc kubenswrapper[4730]: I0320 16:03:49.973474 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010351 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/707f8f93-76f2-4472-a015-5dccae194c5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010395 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvv4\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-kube-api-access-qpvv4\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010419 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010707 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010758 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010919 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.010956 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/707f8f93-76f2-4472-a015-5dccae194c5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011079 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011116 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011137 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt52r\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-kube-api-access-kt52r\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011186 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011204 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011309 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011338 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011355 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.011377 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113564 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113611 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113629 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt52r\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-kube-api-access-kt52r\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113649 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113672 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113690 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113704 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.113724 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114226 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114375 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114604 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114756 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114846 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/707f8f93-76f2-4472-a015-5dccae194c5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.114962 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvv4\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-kube-api-access-qpvv4\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115115 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115228 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115415 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115519 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115610 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115805 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.115907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/707f8f93-76f2-4472-a015-5dccae194c5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.116427 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117529 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117564 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117646 4730 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117662 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117953 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117967 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.119530 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117535 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.117661 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.119951 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/707f8f93-76f2-4472-a015-5dccae194c5e-config-data\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.120158 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.120452 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.121966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 
20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.123993 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.124480 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.125107 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/707f8f93-76f2-4472-a015-5dccae194c5e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.125882 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/707f8f93-76f2-4472-a015-5dccae194c5e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.125988 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.139811 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpvv4\" (UniqueName: 
\"kubernetes.io/projected/707f8f93-76f2-4472-a015-5dccae194c5e-kube-api-access-qpvv4\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.139983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt52r\" (UniqueName: \"kubernetes.io/projected/b92f799a-be4e-45a1-9e2e-c93c4992c9ce-kube-api-access-kt52r\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.164970 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b92f799a-be4e-45a1-9e2e-c93c4992c9ce\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.176818 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"707f8f93-76f2-4472-a015-5dccae194c5e\") " pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.196715 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.246114 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.803395 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 16:03:50 crc kubenswrapper[4730]: I0320 16:03:50.908274 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 16:03:51 crc kubenswrapper[4730]: I0320 16:03:51.545325 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8043f69c-832c-4afa-a9b9-211507664805" path="/var/lib/kubelet/pods/8043f69c-832c-4afa-a9b9-211507664805/volumes" Mar 20 16:03:51 crc kubenswrapper[4730]: I0320 16:03:51.546482 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd9111c-a9f4-4874-91fc-c0ef68ae09a3" path="/var/lib/kubelet/pods/dfd9111c-a9f4-4874-91fc-c0ef68ae09a3/volumes" Mar 20 16:03:51 crc kubenswrapper[4730]: I0320 16:03:51.785789 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"707f8f93-76f2-4472-a015-5dccae194c5e","Type":"ContainerStarted","Data":"53e55d748e88630ace3309f85c281512f29a48784cd77391059785f2314b3d4c"} Mar 20 16:03:51 crc kubenswrapper[4730]: I0320 16:03:51.787174 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b92f799a-be4e-45a1-9e2e-c93c4992c9ce","Type":"ContainerStarted","Data":"2d8da0f1c654d6ae5130aa10ef577327b9906a2dbf7fe48ced9b5c8e74d81bae"} Mar 20 16:03:52 crc kubenswrapper[4730]: I0320 16:03:52.812204 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"707f8f93-76f2-4472-a015-5dccae194c5e","Type":"ContainerStarted","Data":"de03597ab9cfb6693189c9786007baca422ddc54452cd14a6d93d946dbc0292f"} Mar 20 16:03:52 crc kubenswrapper[4730]: I0320 16:03:52.814419 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b92f799a-be4e-45a1-9e2e-c93c4992c9ce","Type":"ContainerStarted","Data":"1e29d00f6f29fa8f0367c00832e311cff753dbf51d6fb8d2e00ec1e4fe83f33b"} Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.439667 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.442950 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.445194 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.491751 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607332 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607402 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607456 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " 
pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607587 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607622 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8ft\" (UniqueName: \"kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.607695 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710035 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: 
\"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710398 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710531 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710755 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710838 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g8ft\" (UniqueName: \"kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " 
pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.710989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.711563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.711585 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.711601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.711691 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 
16:03:59.712613 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.713750 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.730741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g8ft\" (UniqueName: \"kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft\") pod \"dnsmasq-dns-559876945-l7ht2\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:03:59 crc kubenswrapper[4730]: I0320 16:03:59.801368 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.130851 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567044-nw9nk"] Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.132929 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.139371 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.139746 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.139982 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.142051 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567044-nw9nk"] Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.318563 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.322489 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7j8\" (UniqueName: \"kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8\") pod \"auto-csr-approver-29567044-nw9nk\" (UID: \"44fa3d60-826d-4b59-b44a-0102f155b586\") " pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.424900 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7j8\" (UniqueName: \"kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8\") pod \"auto-csr-approver-29567044-nw9nk\" (UID: \"44fa3d60-826d-4b59-b44a-0102f155b586\") " pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.446482 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7j8\" (UniqueName: 
\"kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8\") pod \"auto-csr-approver-29567044-nw9nk\" (UID: \"44fa3d60-826d-4b59-b44a-0102f155b586\") " pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.467075 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.891176 4730 generic.go:334] "Generic (PLEG): container finished" podID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerID="9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a" exitCode=0 Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.891219 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559876945-l7ht2" event={"ID":"99a6ce04-c06b-4fb6-84d6-a836cc82d87a","Type":"ContainerDied","Data":"9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a"} Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.891782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559876945-l7ht2" event={"ID":"99a6ce04-c06b-4fb6-84d6-a836cc82d87a","Type":"ContainerStarted","Data":"81e53bbf904ba77b9ef44d62d3e4f4d28f5a6cda01dd315128148262131a319a"} Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.954200 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567044-nw9nk"] Mar 20 16:04:00 crc kubenswrapper[4730]: W0320 16:04:00.955421 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44fa3d60_826d_4b59_b44a_0102f155b586.slice/crio-6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31 WatchSource:0}: Error finding container 6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31: Status 404 returned error can't find the container with id 
6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31 Mar 20 16:04:00 crc kubenswrapper[4730]: I0320 16:04:00.957568 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:04:01 crc kubenswrapper[4730]: I0320 16:04:01.915625 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559876945-l7ht2" event={"ID":"99a6ce04-c06b-4fb6-84d6-a836cc82d87a","Type":"ContainerStarted","Data":"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3"} Mar 20 16:04:01 crc kubenswrapper[4730]: I0320 16:04:01.916325 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:04:01 crc kubenswrapper[4730]: I0320 16:04:01.916895 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" event={"ID":"44fa3d60-826d-4b59-b44a-0102f155b586","Type":"ContainerStarted","Data":"6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31"} Mar 20 16:04:01 crc kubenswrapper[4730]: I0320 16:04:01.950435 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-559876945-l7ht2" podStartSLOduration=2.950415629 podStartE2EDuration="2.950415629s" podCreationTimestamp="2026-03-20 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:04:01.940786913 +0000 UTC m=+1501.154158292" watchObservedRunningTime="2026-03-20 16:04:01.950415629 +0000 UTC m=+1501.163786998" Mar 20 16:04:02 crc kubenswrapper[4730]: I0320 16:04:02.928042 4730 generic.go:334] "Generic (PLEG): container finished" podID="44fa3d60-826d-4b59-b44a-0102f155b586" containerID="3ad79a5f57b1a4c7b377fb15d13f7708e0e00b53bbc48929b06820bc137a571e" exitCode=0 Mar 20 16:04:02 crc kubenswrapper[4730]: I0320 16:04:02.928292 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567044-nw9nk" event={"ID":"44fa3d60-826d-4b59-b44a-0102f155b586","Type":"ContainerDied","Data":"3ad79a5f57b1a4c7b377fb15d13f7708e0e00b53bbc48929b06820bc137a571e"} Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.298443 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.413083 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz7j8\" (UniqueName: \"kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8\") pod \"44fa3d60-826d-4b59-b44a-0102f155b586\" (UID: \"44fa3d60-826d-4b59-b44a-0102f155b586\") " Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.420083 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8" (OuterVolumeSpecName: "kube-api-access-mz7j8") pod "44fa3d60-826d-4b59-b44a-0102f155b586" (UID: "44fa3d60-826d-4b59-b44a-0102f155b586"). InnerVolumeSpecName "kube-api-access-mz7j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.515526 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz7j8\" (UniqueName: \"kubernetes.io/projected/44fa3d60-826d-4b59-b44a-0102f155b586-kube-api-access-mz7j8\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.989652 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" event={"ID":"44fa3d60-826d-4b59-b44a-0102f155b586","Type":"ContainerDied","Data":"6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31"} Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.989730 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6391bff5eb60c16800aba1e6aa157311c040dbb581d96962fb5337dd149a2c31" Mar 20 16:04:04 crc kubenswrapper[4730]: I0320 16:04:04.989791 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567044-nw9nk" Mar 20 16:04:05 crc kubenswrapper[4730]: I0320 16:04:05.389999 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567038-qvgqb"] Mar 20 16:04:05 crc kubenswrapper[4730]: I0320 16:04:05.397927 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567038-qvgqb"] Mar 20 16:04:05 crc kubenswrapper[4730]: I0320 16:04:05.546813 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67854402-4e0e-4ebe-b9d4-700669827780" path="/var/lib/kubelet/pods/67854402-4e0e-4ebe-b9d4-700669827780/volumes" Mar 20 16:04:09 crc kubenswrapper[4730]: I0320 16:04:09.803192 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:04:09 crc kubenswrapper[4730]: I0320 16:04:09.907564 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:04:09 crc kubenswrapper[4730]: I0320 16:04:09.907978 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="dnsmasq-dns" containerID="cri-o://b9b68041f5e6d75af1b6ac50a20d8efa0a3cf0ab36fd4d2c2cf619950c911910" gracePeriod=10 Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.070523 4730 generic.go:334] "Generic (PLEG): container finished" podID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerID="b9b68041f5e6d75af1b6ac50a20d8efa0a3cf0ab36fd4d2c2cf619950c911910" exitCode=0 Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.070804 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" event={"ID":"eee8e670-a743-4284-bde0-5a8a77d8058e","Type":"ContainerDied","Data":"b9b68041f5e6d75af1b6ac50a20d8efa0a3cf0ab36fd4d2c2cf619950c911910"} Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.103460 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9449c877-vxfrw"] Mar 20 16:04:10 crc kubenswrapper[4730]: E0320 16:04:10.104170 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fa3d60-826d-4b59-b44a-0102f155b586" containerName="oc" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.104195 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fa3d60-826d-4b59-b44a-0102f155b586" containerName="oc" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.104493 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fa3d60-826d-4b59-b44a-0102f155b586" containerName="oc" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.105634 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.126033 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9449c877-vxfrw"] Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249043 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-svc\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249090 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsp4\" (UniqueName: \"kubernetes.io/projected/4139a04b-4804-475f-9da3-6c40dad56690-kube-api-access-dgsp4\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249109 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-config\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249127 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-sb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249183 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-openstack-edpm-ipam\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-swift-storage-0\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.249484 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-nb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351064 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-openstack-edpm-ipam\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351150 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-swift-storage-0\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351177 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-nb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351287 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-svc\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351324 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsp4\" (UniqueName: \"kubernetes.io/projected/4139a04b-4804-475f-9da3-6c40dad56690-kube-api-access-dgsp4\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351349 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-config\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.351374 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-sb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.352241 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-openstack-edpm-ipam\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.352307 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-nb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.352364 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-ovsdbserver-sb\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.352561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-config\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.353224 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-swift-storage-0\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.353313 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139a04b-4804-475f-9da3-6c40dad56690-dns-svc\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: 
\"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.369871 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsp4\" (UniqueName: \"kubernetes.io/projected/4139a04b-4804-475f-9da3-6c40dad56690-kube-api-access-dgsp4\") pod \"dnsmasq-dns-9449c877-vxfrw\" (UID: \"4139a04b-4804-475f-9da3-6c40dad56690\") " pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.434454 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.459753 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.564745 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.564828 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.564878 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.564998 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.565138 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.565238 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0\") pod \"eee8e670-a743-4284-bde0-5a8a77d8058e\" (UID: \"eee8e670-a743-4284-bde0-5a8a77d8058e\") " Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.591089 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5" (OuterVolumeSpecName: "kube-api-access-jmzv5") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "kube-api-access-jmzv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.650860 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config" (OuterVolumeSpecName: "config") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.664018 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.669121 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.674214 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.680130 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.680241 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.680317 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzv5\" (UniqueName: \"kubernetes.io/projected/eee8e670-a743-4284-bde0-5a8a77d8058e-kube-api-access-jmzv5\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.680374 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.680431 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.689396 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eee8e670-a743-4284-bde0-5a8a77d8058e" (UID: "eee8e670-a743-4284-bde0-5a8a77d8058e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:10 crc kubenswrapper[4730]: I0320 16:04:10.783061 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eee8e670-a743-4284-bde0-5a8a77d8058e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:11 crc kubenswrapper[4730]: W0320 16:04:11.061983 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4139a04b_4804_475f_9da3_6c40dad56690.slice/crio-7bcd0ba2a7c520ed08c0538dc5a73015e36d26a4b3b4bc92dbbb7058fce4f6a5 WatchSource:0}: Error finding container 7bcd0ba2a7c520ed08c0538dc5a73015e36d26a4b3b4bc92dbbb7058fce4f6a5: Status 404 returned error can't find the container with id 7bcd0ba2a7c520ed08c0538dc5a73015e36d26a4b3b4bc92dbbb7058fce4f6a5 Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.066053 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9449c877-vxfrw"] Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.085506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" event={"ID":"eee8e670-a743-4284-bde0-5a8a77d8058e","Type":"ContainerDied","Data":"0440fe1ccefaeb24724ee16cfedb746eead11f6c1b7d84e1056a70f539a4c85b"} Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.085564 4730 scope.go:117] "RemoveContainer" containerID="b9b68041f5e6d75af1b6ac50a20d8efa0a3cf0ab36fd4d2c2cf619950c911910" Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.085578 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f99bcbd6f-lkgpt" Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.087176 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9449c877-vxfrw" event={"ID":"4139a04b-4804-475f-9da3-6c40dad56690","Type":"ContainerStarted","Data":"7bcd0ba2a7c520ed08c0538dc5a73015e36d26a4b3b4bc92dbbb7058fce4f6a5"} Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.247087 4730 scope.go:117] "RemoveContainer" containerID="eb503bcef68144de38c1262fa118798748090de6157da609f58bcca7e2bbdbcc" Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.278608 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.291385 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f99bcbd6f-lkgpt"] Mar 20 16:04:11 crc kubenswrapper[4730]: I0320 16:04:11.546048 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" path="/var/lib/kubelet/pods/eee8e670-a743-4284-bde0-5a8a77d8058e/volumes" Mar 20 16:04:12 crc kubenswrapper[4730]: I0320 16:04:12.099547 4730 generic.go:334] "Generic (PLEG): container finished" podID="4139a04b-4804-475f-9da3-6c40dad56690" containerID="65ca3621b3f61bde9805bb83282e2da0767cc223325f4fd92eaef5aa91036539" exitCode=0 Mar 20 16:04:12 crc kubenswrapper[4730]: I0320 16:04:12.099590 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9449c877-vxfrw" event={"ID":"4139a04b-4804-475f-9da3-6c40dad56690","Type":"ContainerDied","Data":"65ca3621b3f61bde9805bb83282e2da0767cc223325f4fd92eaef5aa91036539"} Mar 20 16:04:12 crc kubenswrapper[4730]: I0320 16:04:12.880571 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:04:12 crc kubenswrapper[4730]: I0320 16:04:12.880620 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:04:13 crc kubenswrapper[4730]: I0320 16:04:13.111426 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9449c877-vxfrw" event={"ID":"4139a04b-4804-475f-9da3-6c40dad56690","Type":"ContainerStarted","Data":"1ffb68f266e414b3bc77b5326ad16b0f15aca2fb7b78309a33ced5f4eef7f1db"} Mar 20 16:04:13 crc kubenswrapper[4730]: I0320 16:04:13.111581 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:13 crc kubenswrapper[4730]: I0320 16:04:13.136048 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9449c877-vxfrw" podStartSLOduration=3.136029081 podStartE2EDuration="3.136029081s" podCreationTimestamp="2026-03-20 16:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:04:13.128760133 +0000 UTC m=+1512.342131522" watchObservedRunningTime="2026-03-20 16:04:13.136029081 +0000 UTC m=+1512.349400460" Mar 20 16:04:20 crc kubenswrapper[4730]: I0320 16:04:20.436140 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9449c877-vxfrw" Mar 20 16:04:20 crc kubenswrapper[4730]: I0320 16:04:20.540492 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:04:20 crc kubenswrapper[4730]: I0320 16:04:20.540756 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-559876945-l7ht2" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="dnsmasq-dns" containerID="cri-o://1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3" gracePeriod=10 Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.034287 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124595 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124650 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g8ft\" (UniqueName: \"kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124684 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124724 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124764 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124785 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.124807 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam\") pod \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\" (UID: \"99a6ce04-c06b-4fb6-84d6-a836cc82d87a\") " Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.151140 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft" (OuterVolumeSpecName: "kube-api-access-6g8ft") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "kube-api-access-6g8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.181542 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.189124 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.193544 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.193702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.204725 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config" (OuterVolumeSpecName: "config") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.212591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99a6ce04-c06b-4fb6-84d6-a836cc82d87a" (UID: "99a6ce04-c06b-4fb6-84d6-a836cc82d87a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.218802 4730 generic.go:334] "Generic (PLEG): container finished" podID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerID="1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3" exitCode=0 Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.218846 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559876945-l7ht2" event={"ID":"99a6ce04-c06b-4fb6-84d6-a836cc82d87a","Type":"ContainerDied","Data":"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3"} Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.218873 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559876945-l7ht2" event={"ID":"99a6ce04-c06b-4fb6-84d6-a836cc82d87a","Type":"ContainerDied","Data":"81e53bbf904ba77b9ef44d62d3e4f4d28f5a6cda01dd315128148262131a319a"} Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.218890 4730 scope.go:117] "RemoveContainer" containerID="1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.219034 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559876945-l7ht2" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.227429 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228421 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228446 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g8ft\" (UniqueName: \"kubernetes.io/projected/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-kube-api-access-6g8ft\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228460 4730 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228470 4730 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228480 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.228490 4730 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a6ce04-c06b-4fb6-84d6-a836cc82d87a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 
16:04:21.266648 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.270313 4730 scope.go:117] "RemoveContainer" containerID="9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.277703 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-559876945-l7ht2"] Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.287931 4730 scope.go:117] "RemoveContainer" containerID="1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3" Mar 20 16:04:21 crc kubenswrapper[4730]: E0320 16:04:21.288405 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3\": container with ID starting with 1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3 not found: ID does not exist" containerID="1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.288447 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3"} err="failed to get container status \"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3\": rpc error: code = NotFound desc = could not find container \"1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3\": container with ID starting with 1cf3b8c0059e1e31ef949cf289521c33a2ceeb42e33377927e55f4fd8a97b5b3 not found: ID does not exist" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.288475 4730 scope.go:117] "RemoveContainer" containerID="9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a" Mar 20 16:04:21 crc kubenswrapper[4730]: E0320 16:04:21.288916 4730 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a\": container with ID starting with 9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a not found: ID does not exist" containerID="9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.288957 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a"} err="failed to get container status \"9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a\": rpc error: code = NotFound desc = could not find container \"9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a\": container with ID starting with 9d119601c72d61103c3dc16a3891cc72c3cd26d5d7d837176227b7a8dffe1c9a not found: ID does not exist" Mar 20 16:04:21 crc kubenswrapper[4730]: I0320 16:04:21.547659 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" path="/var/lib/kubelet/pods/99a6ce04-c06b-4fb6-84d6-a836cc82d87a/volumes" Mar 20 16:04:25 crc kubenswrapper[4730]: I0320 16:04:25.268637 4730 generic.go:334] "Generic (PLEG): container finished" podID="707f8f93-76f2-4472-a015-5dccae194c5e" containerID="de03597ab9cfb6693189c9786007baca422ddc54452cd14a6d93d946dbc0292f" exitCode=0 Mar 20 16:04:25 crc kubenswrapper[4730]: I0320 16:04:25.268743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"707f8f93-76f2-4472-a015-5dccae194c5e","Type":"ContainerDied","Data":"de03597ab9cfb6693189c9786007baca422ddc54452cd14a6d93d946dbc0292f"} Mar 20 16:04:25 crc kubenswrapper[4730]: I0320 16:04:25.272872 4730 generic.go:334] "Generic (PLEG): container finished" podID="b92f799a-be4e-45a1-9e2e-c93c4992c9ce" containerID="1e29d00f6f29fa8f0367c00832e311cff753dbf51d6fb8d2e00ec1e4fe83f33b" exitCode=0 Mar 
20 16:04:25 crc kubenswrapper[4730]: I0320 16:04:25.272899 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b92f799a-be4e-45a1-9e2e-c93c4992c9ce","Type":"ContainerDied","Data":"1e29d00f6f29fa8f0367c00832e311cff753dbf51d6fb8d2e00ec1e4fe83f33b"} Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.282106 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"707f8f93-76f2-4472-a015-5dccae194c5e","Type":"ContainerStarted","Data":"3586ba883914e6057094df55473b2fa2d64372ac4b8f14b7b1a92b955e87ee1f"} Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.283312 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.283909 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b92f799a-be4e-45a1-9e2e-c93c4992c9ce","Type":"ContainerStarted","Data":"de3d681419b61f50ed45583f461818df98687096ac5479acda76b8915295f730"} Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.284282 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.310422 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.310405316 podStartE2EDuration="37.310405316s" podCreationTimestamp="2026-03-20 16:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:04:26.309467029 +0000 UTC m=+1525.522838398" watchObservedRunningTime="2026-03-20 16:04:26.310405316 +0000 UTC m=+1525.523776685" Mar 20 16:04:26 crc kubenswrapper[4730]: I0320 16:04:26.345124 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.345103218 podStartE2EDuration="37.345103218s" podCreationTimestamp="2026-03-20 16:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:04:26.33956293 +0000 UTC m=+1525.552934299" watchObservedRunningTime="2026-03-20 16:04:26.345103218 +0000 UTC m=+1525.558474587" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.956496 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:04:30 crc kubenswrapper[4730]: E0320 16:04:30.957595 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="init" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957614 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="init" Mar 20 16:04:30 crc kubenswrapper[4730]: E0320 16:04:30.957639 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957646 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: E0320 16:04:30.957690 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957698 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: E0320 16:04:30.957712 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="init" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957719 4730 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="init" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957957 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a6ce04-c06b-4fb6-84d6-a836cc82d87a" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.957995 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee8e670-a743-4284-bde0-5a8a77d8058e" containerName="dnsmasq-dns" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.959972 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:30 crc kubenswrapper[4730]: I0320 16:04:30.973862 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.114561 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbl4\" (UniqueName: \"kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.115047 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.115372 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content\") pod \"redhat-operators-xhvz9\" (UID: 
\"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.217481 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.217573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.217624 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbl4\" (UniqueName: \"kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.217987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.218263 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " 
pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.246079 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbl4\" (UniqueName: \"kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4\") pod \"redhat-operators-xhvz9\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.323399 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:31 crc kubenswrapper[4730]: I0320 16:04:31.787773 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.341734 4730 generic.go:334] "Generic (PLEG): container finished" podID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerID="2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270" exitCode=0 Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.341910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerDied","Data":"2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270"} Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.342108 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerStarted","Data":"420ee018ee3dd1e22e20f8b86a79c9d53492851583d49970e0c00652b3d5f76f"} Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.764011 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg"] Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.767550 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.772728 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.772898 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.772961 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.774283 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.776507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg"] Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.953224 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.953619 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.953804 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:32 crc kubenswrapper[4730]: I0320 16:04:32.953986 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5psf\" (UniqueName: \"kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.056018 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.057375 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5psf\" (UniqueName: \"kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.057516 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.057576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.063127 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.064997 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.072550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5psf\" (UniqueName: \"kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.073532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.090498 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:33 crc kubenswrapper[4730]: I0320 16:04:33.678708 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg"] Mar 20 16:04:34 crc kubenswrapper[4730]: I0320 16:04:34.379642 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerStarted","Data":"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68"} Mar 20 16:04:34 crc kubenswrapper[4730]: I0320 16:04:34.382377 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" event={"ID":"16667e9d-1075-4c26-8002-61c737a8f76a","Type":"ContainerStarted","Data":"494267ff651b1f0f1a44ff0b53d0791cf31d3778677557978f567145c3808dbc"} Mar 20 16:04:36 crc kubenswrapper[4730]: I0320 16:04:36.415787 4730 generic.go:334] "Generic (PLEG): container finished" podID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerID="61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68" exitCode=0 Mar 20 16:04:36 crc kubenswrapper[4730]: I0320 16:04:36.415905 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerDied","Data":"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68"} Mar 20 16:04:40 crc kubenswrapper[4730]: I0320 16:04:40.199330 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="707f8f93-76f2-4472-a015-5dccae194c5e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.230:5671: connect: connection refused" Mar 20 16:04:40 crc kubenswrapper[4730]: I0320 16:04:40.249508 4730 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b92f799a-be4e-45a1-9e2e-c93c4992c9ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.231:5671: connect: connection refused" Mar 20 16:04:42 crc kubenswrapper[4730]: I0320 16:04:42.880669 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:04:42 crc kubenswrapper[4730]: I0320 16:04:42.881240 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:04:43 crc kubenswrapper[4730]: I0320 16:04:43.501926 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerStarted","Data":"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33"} Mar 20 16:04:43 crc kubenswrapper[4730]: I0320 16:04:43.504405 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" event={"ID":"16667e9d-1075-4c26-8002-61c737a8f76a","Type":"ContainerStarted","Data":"605e5e943174f16d82be8a689b82d05d8b0a01532e03b17f3a5561bc1ac00775"} Mar 20 16:04:43 crc kubenswrapper[4730]: I0320 16:04:43.520784 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xhvz9" podStartSLOduration=2.672452167 podStartE2EDuration="13.520766117s" podCreationTimestamp="2026-03-20 16:04:30 +0000 UTC" firstStartedPulling="2026-03-20 16:04:32.343629329 +0000 UTC m=+1531.557000698" lastFinishedPulling="2026-03-20 16:04:43.191943279 +0000 UTC m=+1542.405314648" observedRunningTime="2026-03-20 16:04:43.518965426 +0000 UTC m=+1542.732336805" watchObservedRunningTime="2026-03-20 16:04:43.520766117 +0000 UTC m=+1542.734137486" Mar 20 16:04:43 crc kubenswrapper[4730]: I0320 16:04:43.540060 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" podStartSLOduration=1.987685247 podStartE2EDuration="11.540030568s" podCreationTimestamp="2026-03-20 16:04:32 +0000 UTC" firstStartedPulling="2026-03-20 16:04:33.685558293 +0000 UTC m=+1532.898929662" lastFinishedPulling="2026-03-20 16:04:43.237903604 +0000 UTC m=+1542.451274983" observedRunningTime="2026-03-20 16:04:43.539091221 +0000 UTC m=+1542.752462610" watchObservedRunningTime="2026-03-20 16:04:43.540030568 +0000 UTC m=+1542.753401947" Mar 20 16:04:46 crc kubenswrapper[4730]: I0320 16:04:46.192171 4730 scope.go:117] "RemoveContainer" containerID="caec51e4f1b5d91020f11b5970f403cd0356b8c6fa1f260cecf4ea6e449980f1" Mar 20 16:04:46 crc kubenswrapper[4730]: I0320 16:04:46.275402 4730 scope.go:117] "RemoveContainer" containerID="13985a1e2e3d58d396be0af6437cdcdb0bbdea54308502442707c077b36e9713" Mar 20 16:04:50 crc kubenswrapper[4730]: I0320 16:04:50.199437 4730 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 16:04:50 crc kubenswrapper[4730]: I0320 16:04:50.249516 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 16:04:51 crc kubenswrapper[4730]: I0320 16:04:51.324371 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:51 crc kubenswrapper[4730]: I0320 16:04:51.324710 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:04:52 crc kubenswrapper[4730]: I0320 16:04:52.378600 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xhvz9" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="registry-server" probeResult="failure" output=< Mar 20 16:04:52 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:04:52 crc kubenswrapper[4730]: > Mar 20 16:04:54 crc kubenswrapper[4730]: I0320 16:04:54.611430 4730 generic.go:334] "Generic (PLEG): container finished" podID="16667e9d-1075-4c26-8002-61c737a8f76a" containerID="605e5e943174f16d82be8a689b82d05d8b0a01532e03b17f3a5561bc1ac00775" exitCode=0 Mar 20 16:04:54 crc kubenswrapper[4730]: I0320 16:04:54.611519 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" event={"ID":"16667e9d-1075-4c26-8002-61c737a8f76a","Type":"ContainerDied","Data":"605e5e943174f16d82be8a689b82d05d8b0a01532e03b17f3a5561bc1ac00775"} Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.447909 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.545854 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5psf\" (UniqueName: \"kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf\") pod \"16667e9d-1075-4c26-8002-61c737a8f76a\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.546035 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle\") pod \"16667e9d-1075-4c26-8002-61c737a8f76a\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.546078 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory\") pod \"16667e9d-1075-4c26-8002-61c737a8f76a\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.546184 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam\") pod \"16667e9d-1075-4c26-8002-61c737a8f76a\" (UID: \"16667e9d-1075-4c26-8002-61c737a8f76a\") " Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.552023 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf" (OuterVolumeSpecName: "kube-api-access-x5psf") pod "16667e9d-1075-4c26-8002-61c737a8f76a" (UID: "16667e9d-1075-4c26-8002-61c737a8f76a"). InnerVolumeSpecName "kube-api-access-x5psf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.552747 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "16667e9d-1075-4c26-8002-61c737a8f76a" (UID: "16667e9d-1075-4c26-8002-61c737a8f76a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.577893 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory" (OuterVolumeSpecName: "inventory") pod "16667e9d-1075-4c26-8002-61c737a8f76a" (UID: "16667e9d-1075-4c26-8002-61c737a8f76a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.582997 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16667e9d-1075-4c26-8002-61c737a8f76a" (UID: "16667e9d-1075-4c26-8002-61c737a8f76a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.629504 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" event={"ID":"16667e9d-1075-4c26-8002-61c737a8f76a","Type":"ContainerDied","Data":"494267ff651b1f0f1a44ff0b53d0791cf31d3778677557978f567145c3808dbc"} Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.629551 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.629562 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="494267ff651b1f0f1a44ff0b53d0791cf31d3778677557978f567145c3808dbc" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.655994 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.656037 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5psf\" (UniqueName: \"kubernetes.io/projected/16667e9d-1075-4c26-8002-61c737a8f76a-kube-api-access-x5psf\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.656050 4730 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.656064 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16667e9d-1075-4c26-8002-61c737a8f76a-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.745639 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9"] Mar 20 16:04:56 crc kubenswrapper[4730]: E0320 16:04:56.746100 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16667e9d-1075-4c26-8002-61c737a8f76a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.746123 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="16667e9d-1075-4c26-8002-61c737a8f76a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.746381 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="16667e9d-1075-4c26-8002-61c737a8f76a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.747089 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.749186 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.749402 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.749545 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.749709 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.758080 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9"] Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.859998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgbk\" (UniqueName: \"kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.860061 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.860188 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.961642 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.961745 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgbk\" (UniqueName: \"kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.961784 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.965402 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.965847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:56 crc kubenswrapper[4730]: I0320 16:04:56.982234 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgbk\" (UniqueName: \"kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-q8dm9\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:57 crc kubenswrapper[4730]: I0320 16:04:57.063759 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:04:57 crc kubenswrapper[4730]: I0320 16:04:57.554977 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9"] Mar 20 16:04:57 crc kubenswrapper[4730]: W0320 16:04:57.560625 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod129ce6b6_b215_4ca0_9583_78aae3c2371c.slice/crio-262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764 WatchSource:0}: Error finding container 262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764: Status 404 returned error can't find the container with id 262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764 Mar 20 16:04:57 crc kubenswrapper[4730]: I0320 16:04:57.640763 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" event={"ID":"129ce6b6-b215-4ca0-9583-78aae3c2371c","Type":"ContainerStarted","Data":"262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764"} Mar 20 16:05:00 crc kubenswrapper[4730]: I0320 16:05:00.677320 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" event={"ID":"129ce6b6-b215-4ca0-9583-78aae3c2371c","Type":"ContainerStarted","Data":"7d0ef78deb033d3651878fe486ebe9545f0d71329e31ee70e6f2dbfebae138c9"} Mar 20 16:05:00 crc kubenswrapper[4730]: I0320 16:05:00.717197 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" podStartSLOduration=2.692386946 podStartE2EDuration="4.71717121s" podCreationTimestamp="2026-03-20 16:04:56 +0000 UTC" firstStartedPulling="2026-03-20 16:04:57.564286509 +0000 UTC m=+1556.777657878" lastFinishedPulling="2026-03-20 16:04:59.589070773 +0000 UTC m=+1558.802442142" observedRunningTime="2026-03-20 
16:05:00.707410301 +0000 UTC m=+1559.920781680" watchObservedRunningTime="2026-03-20 16:05:00.71717121 +0000 UTC m=+1559.930542599" Mar 20 16:05:01 crc kubenswrapper[4730]: I0320 16:05:01.385869 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:05:01 crc kubenswrapper[4730]: I0320 16:05:01.443783 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:05:02 crc kubenswrapper[4730]: I0320 16:05:02.158677 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:05:02 crc kubenswrapper[4730]: I0320 16:05:02.694212 4730 generic.go:334] "Generic (PLEG): container finished" podID="129ce6b6-b215-4ca0-9583-78aae3c2371c" containerID="7d0ef78deb033d3651878fe486ebe9545f0d71329e31ee70e6f2dbfebae138c9" exitCode=0 Mar 20 16:05:02 crc kubenswrapper[4730]: I0320 16:05:02.694303 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" event={"ID":"129ce6b6-b215-4ca0-9583-78aae3c2371c","Type":"ContainerDied","Data":"7d0ef78deb033d3651878fe486ebe9545f0d71329e31ee70e6f2dbfebae138c9"} Mar 20 16:05:02 crc kubenswrapper[4730]: I0320 16:05:02.694482 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xhvz9" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="registry-server" containerID="cri-o://07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33" gracePeriod=2 Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.212336 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.287317 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities\") pod \"42c5867a-e6e4-43d8-8529-e75f856fb943\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.287459 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xbl4\" (UniqueName: \"kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4\") pod \"42c5867a-e6e4-43d8-8529-e75f856fb943\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.287752 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content\") pod \"42c5867a-e6e4-43d8-8529-e75f856fb943\" (UID: \"42c5867a-e6e4-43d8-8529-e75f856fb943\") " Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.288265 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities" (OuterVolumeSpecName: "utilities") pod "42c5867a-e6e4-43d8-8529-e75f856fb943" (UID: "42c5867a-e6e4-43d8-8529-e75f856fb943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.299480 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4" (OuterVolumeSpecName: "kube-api-access-6xbl4") pod "42c5867a-e6e4-43d8-8529-e75f856fb943" (UID: "42c5867a-e6e4-43d8-8529-e75f856fb943"). InnerVolumeSpecName "kube-api-access-6xbl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.390142 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.390191 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xbl4\" (UniqueName: \"kubernetes.io/projected/42c5867a-e6e4-43d8-8529-e75f856fb943-kube-api-access-6xbl4\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.430591 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c5867a-e6e4-43d8-8529-e75f856fb943" (UID: "42c5867a-e6e4-43d8-8529-e75f856fb943"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.491653 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c5867a-e6e4-43d8-8529-e75f856fb943-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.708599 4730 generic.go:334] "Generic (PLEG): container finished" podID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerID="07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33" exitCode=0 Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.708670 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerDied","Data":"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33"} Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.708741 4730 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xhvz9" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.709025 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xhvz9" event={"ID":"42c5867a-e6e4-43d8-8529-e75f856fb943","Type":"ContainerDied","Data":"420ee018ee3dd1e22e20f8b86a79c9d53492851583d49970e0c00652b3d5f76f"} Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.709081 4730 scope.go:117] "RemoveContainer" containerID="07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.737036 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.745873 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xhvz9"] Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.746767 4730 scope.go:117] "RemoveContainer" containerID="61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.784229 4730 scope.go:117] "RemoveContainer" containerID="2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.828704 4730 scope.go:117] "RemoveContainer" containerID="07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33" Mar 20 16:05:03 crc kubenswrapper[4730]: E0320 16:05:03.829320 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33\": container with ID starting with 07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33 not found: ID does not exist" containerID="07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.829370 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33"} err="failed to get container status \"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33\": rpc error: code = NotFound desc = could not find container \"07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33\": container with ID starting with 07f978941b4653bb8671c40ee0f55a53c4f8e766a6251cc1507c3fb977516d33 not found: ID does not exist" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.829405 4730 scope.go:117] "RemoveContainer" containerID="61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68" Mar 20 16:05:03 crc kubenswrapper[4730]: E0320 16:05:03.829835 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68\": container with ID starting with 61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68 not found: ID does not exist" containerID="61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.829867 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68"} err="failed to get container status \"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68\": rpc error: code = NotFound desc = could not find container \"61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68\": container with ID starting with 61b00a2f85c0359c92c4dd158661d158b1cc1247d0b76c3308b6285914985d68 not found: ID does not exist" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.829885 4730 scope.go:117] "RemoveContainer" containerID="2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270" Mar 20 16:05:03 crc kubenswrapper[4730]: E0320 
16:05:03.830304 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270\": container with ID starting with 2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270 not found: ID does not exist" containerID="2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270" Mar 20 16:05:03 crc kubenswrapper[4730]: I0320 16:05:03.830346 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270"} err="failed to get container status \"2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270\": rpc error: code = NotFound desc = could not find container \"2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270\": container with ID starting with 2e057f74260f665473cbacc2a40a935ef6c07128483ad035e7764c0000379270 not found: ID does not exist" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.156152 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.331009 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgbk\" (UniqueName: \"kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk\") pod \"129ce6b6-b215-4ca0-9583-78aae3c2371c\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.331225 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory\") pod \"129ce6b6-b215-4ca0-9583-78aae3c2371c\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.331282 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam\") pod \"129ce6b6-b215-4ca0-9583-78aae3c2371c\" (UID: \"129ce6b6-b215-4ca0-9583-78aae3c2371c\") " Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.337402 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk" (OuterVolumeSpecName: "kube-api-access-pcgbk") pod "129ce6b6-b215-4ca0-9583-78aae3c2371c" (UID: "129ce6b6-b215-4ca0-9583-78aae3c2371c"). InnerVolumeSpecName "kube-api-access-pcgbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.360313 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "129ce6b6-b215-4ca0-9583-78aae3c2371c" (UID: "129ce6b6-b215-4ca0-9583-78aae3c2371c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.368769 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory" (OuterVolumeSpecName: "inventory") pod "129ce6b6-b215-4ca0-9583-78aae3c2371c" (UID: "129ce6b6-b215-4ca0-9583-78aae3c2371c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.434267 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.434920 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgbk\" (UniqueName: \"kubernetes.io/projected/129ce6b6-b215-4ca0-9583-78aae3c2371c-kube-api-access-pcgbk\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.434967 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/129ce6b6-b215-4ca0-9583-78aae3c2371c-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.718805 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" 
event={"ID":"129ce6b6-b215-4ca0-9583-78aae3c2371c","Type":"ContainerDied","Data":"262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764"} Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.718836 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-q8dm9" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.718850 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262a083a2b3b3ee1d0eda393fa4359a321eb491526e40334edf47918d7caf764" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.793878 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh"] Mar 20 16:05:04 crc kubenswrapper[4730]: E0320 16:05:04.794365 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="registry-server" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794385 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="registry-server" Mar 20 16:05:04 crc kubenswrapper[4730]: E0320 16:05:04.794409 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129ce6b6-b215-4ca0-9583-78aae3c2371c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794418 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="129ce6b6-b215-4ca0-9583-78aae3c2371c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 20 16:05:04 crc kubenswrapper[4730]: E0320 16:05:04.794432 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="extract-utilities" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794439 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" 
containerName="extract-utilities" Mar 20 16:05:04 crc kubenswrapper[4730]: E0320 16:05:04.794466 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="extract-content" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794473 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="extract-content" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794720 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" containerName="registry-server" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.794764 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="129ce6b6-b215-4ca0-9583-78aae3c2371c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.795627 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.800782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.801102 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.801881 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.802113 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.815703 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh"] Mar 20 16:05:04 crc 
kubenswrapper[4730]: I0320 16:05:04.945988 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.946236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.946543 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4zf\" (UniqueName: \"kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:04 crc kubenswrapper[4730]: I0320 16:05:04.946598 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.049583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.049680 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.049766 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4zf\" (UniqueName: \"kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.049792 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.054276 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: 
\"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.054983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.056850 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.065554 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4zf\" (UniqueName: \"kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.121523 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.550746 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c5867a-e6e4-43d8-8529-e75f856fb943" path="/var/lib/kubelet/pods/42c5867a-e6e4-43d8-8529-e75f856fb943/volumes" Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.665210 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh"] Mar 20 16:05:05 crc kubenswrapper[4730]: I0320 16:05:05.730724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" event={"ID":"73c1c649-4459-497e-ba5b-245a4eb5ad04","Type":"ContainerStarted","Data":"cd8b0f41c6bc95bfee5c3d1ca2c479defe33b5a42deb27db28effb10edd49695"} Mar 20 16:05:06 crc kubenswrapper[4730]: I0320 16:05:06.758646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" event={"ID":"73c1c649-4459-497e-ba5b-245a4eb5ad04","Type":"ContainerStarted","Data":"70f50b402151af161dd2f676f7cb4e3b396dcdbae945eab08ae85bd1aeb89352"} Mar 20 16:05:06 crc kubenswrapper[4730]: I0320 16:05:06.774817 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" podStartSLOduration=2.106650344 podStartE2EDuration="2.774801161s" podCreationTimestamp="2026-03-20 16:05:04 +0000 UTC" firstStartedPulling="2026-03-20 16:05:05.670412632 +0000 UTC m=+1564.883784001" lastFinishedPulling="2026-03-20 16:05:06.338563459 +0000 UTC m=+1565.551934818" observedRunningTime="2026-03-20 16:05:06.77373829 +0000 UTC m=+1565.987109669" watchObservedRunningTime="2026-03-20 16:05:06.774801161 +0000 UTC m=+1565.988172530" Mar 20 16:05:12 crc kubenswrapper[4730]: I0320 16:05:12.879821 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:05:12 crc kubenswrapper[4730]: I0320 16:05:12.880734 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:05:12 crc kubenswrapper[4730]: I0320 16:05:12.880851 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:05:12 crc kubenswrapper[4730]: I0320 16:05:12.882157 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:05:12 crc kubenswrapper[4730]: I0320 16:05:12.882230 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143" gracePeriod=600 Mar 20 16:05:13 crc kubenswrapper[4730]: I0320 16:05:13.853541 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143" exitCode=0 Mar 20 16:05:13 crc kubenswrapper[4730]: I0320 16:05:13.853870 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143"} Mar 20 16:05:13 crc kubenswrapper[4730]: I0320 16:05:13.853966 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120"} Mar 20 16:05:13 crc kubenswrapper[4730]: I0320 16:05:13.853999 4730 scope.go:117] "RemoveContainer" containerID="2aab75ddb2e10e731a7d582f69fae06a40e7e5a6270ff47496bdac5fb9c6ebfd" Mar 20 16:05:46 crc kubenswrapper[4730]: I0320 16:05:46.414909 4730 scope.go:117] "RemoveContainer" containerID="e7513ea86a1e88bc7e61a8263e52b8513c6fca0f458503f246a5451604a802da" Mar 20 16:05:46 crc kubenswrapper[4730]: I0320 16:05:46.481610 4730 scope.go:117] "RemoveContainer" containerID="3a783d296547ab247634b62ed131b57fa9392453e5aadc95036d56c15ea1686f" Mar 20 16:05:46 crc kubenswrapper[4730]: I0320 16:05:46.510110 4730 scope.go:117] "RemoveContainer" containerID="7f523e2f068601b64366f703836d659d597d40e8112309dd07122d42b1769869" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.144766 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567046-8f6sv"] Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.147073 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.149233 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.149436 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.150140 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.163416 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567046-8f6sv"] Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.301481 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gp6\" (UniqueName: \"kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6\") pod \"auto-csr-approver-29567046-8f6sv\" (UID: \"8dacfdca-1b6e-4336-8089-722d36388128\") " pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.403436 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gp6\" (UniqueName: \"kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6\") pod \"auto-csr-approver-29567046-8f6sv\" (UID: \"8dacfdca-1b6e-4336-8089-722d36388128\") " pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.424049 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gp6\" (UniqueName: \"kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6\") pod \"auto-csr-approver-29567046-8f6sv\" (UID: \"8dacfdca-1b6e-4336-8089-722d36388128\") " 
pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.466012 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:00 crc kubenswrapper[4730]: I0320 16:06:00.905372 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567046-8f6sv"] Mar 20 16:06:00 crc kubenswrapper[4730]: W0320 16:06:00.910150 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dacfdca_1b6e_4336_8089_722d36388128.slice/crio-b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b WatchSource:0}: Error finding container b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b: Status 404 returned error can't find the container with id b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b Mar 20 16:06:01 crc kubenswrapper[4730]: I0320 16:06:01.343123 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" event={"ID":"8dacfdca-1b6e-4336-8089-722d36388128","Type":"ContainerStarted","Data":"b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b"} Mar 20 16:06:02 crc kubenswrapper[4730]: I0320 16:06:02.353379 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" event={"ID":"8dacfdca-1b6e-4336-8089-722d36388128","Type":"ContainerStarted","Data":"6e67353e8a39d519cf4269a9771ca30ae4c8d30443c293283645c96cf02f2776"} Mar 20 16:06:02 crc kubenswrapper[4730]: I0320 16:06:02.370940 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" podStartSLOduration=1.406102738 podStartE2EDuration="2.370919023s" podCreationTimestamp="2026-03-20 16:06:00 +0000 UTC" firstStartedPulling="2026-03-20 16:06:00.912762323 +0000 UTC 
m=+1620.126133692" lastFinishedPulling="2026-03-20 16:06:01.877578588 +0000 UTC m=+1621.090949977" observedRunningTime="2026-03-20 16:06:02.367598638 +0000 UTC m=+1621.580970027" watchObservedRunningTime="2026-03-20 16:06:02.370919023 +0000 UTC m=+1621.584290402" Mar 20 16:06:03 crc kubenswrapper[4730]: I0320 16:06:03.363909 4730 generic.go:334] "Generic (PLEG): container finished" podID="8dacfdca-1b6e-4336-8089-722d36388128" containerID="6e67353e8a39d519cf4269a9771ca30ae4c8d30443c293283645c96cf02f2776" exitCode=0 Mar 20 16:06:03 crc kubenswrapper[4730]: I0320 16:06:03.363974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" event={"ID":"8dacfdca-1b6e-4336-8089-722d36388128","Type":"ContainerDied","Data":"6e67353e8a39d519cf4269a9771ca30ae4c8d30443c293283645c96cf02f2776"} Mar 20 16:06:04 crc kubenswrapper[4730]: I0320 16:06:04.760588 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:04 crc kubenswrapper[4730]: I0320 16:06:04.902826 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2gp6\" (UniqueName: \"kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6\") pod \"8dacfdca-1b6e-4336-8089-722d36388128\" (UID: \"8dacfdca-1b6e-4336-8089-722d36388128\") " Mar 20 16:06:04 crc kubenswrapper[4730]: I0320 16:06:04.908215 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6" (OuterVolumeSpecName: "kube-api-access-d2gp6") pod "8dacfdca-1b6e-4336-8089-722d36388128" (UID: "8dacfdca-1b6e-4336-8089-722d36388128"). InnerVolumeSpecName "kube-api-access-d2gp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.005919 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2gp6\" (UniqueName: \"kubernetes.io/projected/8dacfdca-1b6e-4336-8089-722d36388128-kube-api-access-d2gp6\") on node \"crc\" DevicePath \"\"" Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.382590 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" event={"ID":"8dacfdca-1b6e-4336-8089-722d36388128","Type":"ContainerDied","Data":"b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b"} Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.382642 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73bb10ec72b32b9bfcb5be49e347491430ce3cef3e789d130da7b549ea69c8b" Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.382654 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567046-8f6sv" Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.823930 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567040-2zl4f"] Mar 20 16:06:05 crc kubenswrapper[4730]: I0320 16:06:05.833176 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567040-2zl4f"] Mar 20 16:06:07 crc kubenswrapper[4730]: I0320 16:06:07.545171 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b63a10-b572-4a37-a2a4-079852aa2d3d" path="/var/lib/kubelet/pods/97b63a10-b572-4a37-a2a4-079852aa2d3d/volumes" Mar 20 16:06:46 crc kubenswrapper[4730]: I0320 16:06:46.644168 4730 scope.go:117] "RemoveContainer" containerID="347fe11ee7c05acba952c1a21fa83ca176c9f921071221e9dbdf6170682cd003" Mar 20 16:06:46 crc kubenswrapper[4730]: I0320 16:06:46.702431 4730 scope.go:117] "RemoveContainer" 
containerID="89ef3de4f8d5002494a53f05fdcc4fa61cfc7cf388b35f48076aa3b98fc5e176" Mar 20 16:06:46 crc kubenswrapper[4730]: I0320 16:06:46.743606 4730 scope.go:117] "RemoveContainer" containerID="859fbb7a55a48ffe4a6d03732d3cc6088c3d226367ded637f27bf24936c41dba" Mar 20 16:07:42 crc kubenswrapper[4730]: I0320 16:07:42.881950 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:07:42 crc kubenswrapper[4730]: I0320 16:07:42.882455 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.163959 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567048-nhd7d"] Mar 20 16:08:00 crc kubenswrapper[4730]: E0320 16:08:00.166763 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dacfdca-1b6e-4336-8089-722d36388128" containerName="oc" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.166907 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dacfdca-1b6e-4336-8089-722d36388128" containerName="oc" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.167325 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dacfdca-1b6e-4336-8089-722d36388128" containerName="oc" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.168431 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.173553 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.175685 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.176078 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.180567 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567048-nhd7d"] Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.202115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zn6\" (UniqueName: \"kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6\") pod \"auto-csr-approver-29567048-nhd7d\" (UID: \"28bea13e-dd2a-4ecf-9182-cc639a47c75f\") " pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.304266 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zn6\" (UniqueName: \"kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6\") pod \"auto-csr-approver-29567048-nhd7d\" (UID: \"28bea13e-dd2a-4ecf-9182-cc639a47c75f\") " pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.322563 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zn6\" (UniqueName: \"kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6\") pod \"auto-csr-approver-29567048-nhd7d\" (UID: \"28bea13e-dd2a-4ecf-9182-cc639a47c75f\") " 
pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:00 crc kubenswrapper[4730]: I0320 16:08:00.510228 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:01 crc kubenswrapper[4730]: I0320 16:08:00.998319 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567048-nhd7d"] Mar 20 16:08:01 crc kubenswrapper[4730]: I0320 16:08:01.640653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" event={"ID":"28bea13e-dd2a-4ecf-9182-cc639a47c75f","Type":"ContainerStarted","Data":"17819f0755f55e41f14d87759253422fcd8742a0faa8a7cc476188b2acca2de7"} Mar 20 16:08:02 crc kubenswrapper[4730]: I0320 16:08:02.652702 4730 generic.go:334] "Generic (PLEG): container finished" podID="28bea13e-dd2a-4ecf-9182-cc639a47c75f" containerID="3781eddb5c4e3f16097f248108ceebb43195728c87b3ad6512e75bc75dffb2bb" exitCode=0 Mar 20 16:08:02 crc kubenswrapper[4730]: I0320 16:08:02.652754 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" event={"ID":"28bea13e-dd2a-4ecf-9182-cc639a47c75f","Type":"ContainerDied","Data":"3781eddb5c4e3f16097f248108ceebb43195728c87b3ad6512e75bc75dffb2bb"} Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.098655 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.193622 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zn6\" (UniqueName: \"kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6\") pod \"28bea13e-dd2a-4ecf-9182-cc639a47c75f\" (UID: \"28bea13e-dd2a-4ecf-9182-cc639a47c75f\") " Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.198550 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6" (OuterVolumeSpecName: "kube-api-access-49zn6") pod "28bea13e-dd2a-4ecf-9182-cc639a47c75f" (UID: "28bea13e-dd2a-4ecf-9182-cc639a47c75f"). InnerVolumeSpecName "kube-api-access-49zn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.295993 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49zn6\" (UniqueName: \"kubernetes.io/projected/28bea13e-dd2a-4ecf-9182-cc639a47c75f-kube-api-access-49zn6\") on node \"crc\" DevicePath \"\"" Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.680239 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" event={"ID":"28bea13e-dd2a-4ecf-9182-cc639a47c75f","Type":"ContainerDied","Data":"17819f0755f55e41f14d87759253422fcd8742a0faa8a7cc476188b2acca2de7"} Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.680317 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17819f0755f55e41f14d87759253422fcd8742a0faa8a7cc476188b2acca2de7" Mar 20 16:08:04 crc kubenswrapper[4730]: I0320 16:08:04.680387 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567048-nhd7d" Mar 20 16:08:05 crc kubenswrapper[4730]: I0320 16:08:05.179407 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567042-rngpc"] Mar 20 16:08:05 crc kubenswrapper[4730]: I0320 16:08:05.189188 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567042-rngpc"] Mar 20 16:08:05 crc kubenswrapper[4730]: I0320 16:08:05.543536 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1785c3-01b9-48cd-bfc9-c0fdb1c18455" path="/var/lib/kubelet/pods/8f1785c3-01b9-48cd-bfc9-c0fdb1c18455/volumes" Mar 20 16:08:10 crc kubenswrapper[4730]: I0320 16:08:10.759137 4730 generic.go:334] "Generic (PLEG): container finished" podID="73c1c649-4459-497e-ba5b-245a4eb5ad04" containerID="70f50b402151af161dd2f676f7cb4e3b396dcdbae945eab08ae85bd1aeb89352" exitCode=0 Mar 20 16:08:10 crc kubenswrapper[4730]: I0320 16:08:10.759277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" event={"ID":"73c1c649-4459-497e-ba5b-245a4eb5ad04","Type":"ContainerDied","Data":"70f50b402151af161dd2f676f7cb4e3b396dcdbae945eab08ae85bd1aeb89352"} Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.191517 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.380703 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam\") pod \"73c1c649-4459-497e-ba5b-245a4eb5ad04\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.380755 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle\") pod \"73c1c649-4459-497e-ba5b-245a4eb5ad04\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.380820 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory\") pod \"73c1c649-4459-497e-ba5b-245a4eb5ad04\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.380890 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc4zf\" (UniqueName: \"kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf\") pod \"73c1c649-4459-497e-ba5b-245a4eb5ad04\" (UID: \"73c1c649-4459-497e-ba5b-245a4eb5ad04\") " Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.394306 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf" (OuterVolumeSpecName: "kube-api-access-nc4zf") pod "73c1c649-4459-497e-ba5b-245a4eb5ad04" (UID: "73c1c649-4459-497e-ba5b-245a4eb5ad04"). InnerVolumeSpecName "kube-api-access-nc4zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.394465 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "73c1c649-4459-497e-ba5b-245a4eb5ad04" (UID: "73c1c649-4459-497e-ba5b-245a4eb5ad04"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.416124 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory" (OuterVolumeSpecName: "inventory") pod "73c1c649-4459-497e-ba5b-245a4eb5ad04" (UID: "73c1c649-4459-497e-ba5b-245a4eb5ad04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.422021 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73c1c649-4459-497e-ba5b-245a4eb5ad04" (UID: "73c1c649-4459-497e-ba5b-245a4eb5ad04"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.483765 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.483806 4730 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.483819 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73c1c649-4459-497e-ba5b-245a4eb5ad04-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.483831 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc4zf\" (UniqueName: \"kubernetes.io/projected/73c1c649-4459-497e-ba5b-245a4eb5ad04-kube-api-access-nc4zf\") on node \"crc\" DevicePath \"\"" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.783975 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" event={"ID":"73c1c649-4459-497e-ba5b-245a4eb5ad04","Type":"ContainerDied","Data":"cd8b0f41c6bc95bfee5c3d1ca2c479defe33b5a42deb27db28effb10edd49695"} Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.784024 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8b0f41c6bc95bfee5c3d1ca2c479defe33b5a42deb27db28effb10edd49695" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.784022 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.879925 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.879997 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.888091 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz"] Mar 20 16:08:12 crc kubenswrapper[4730]: E0320 16:08:12.888953 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c1c649-4459-497e-ba5b-245a4eb5ad04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.889097 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c1c649-4459-497e-ba5b-245a4eb5ad04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 20 16:08:12 crc kubenswrapper[4730]: E0320 16:08:12.889239 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bea13e-dd2a-4ecf-9182-cc639a47c75f" containerName="oc" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.889331 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bea13e-dd2a-4ecf-9182-cc639a47c75f" containerName="oc" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.889866 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="28bea13e-dd2a-4ecf-9182-cc639a47c75f" containerName="oc" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.889979 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c1c649-4459-497e-ba5b-245a4eb5ad04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.890943 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.894597 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.894701 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.894988 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.895310 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.899427 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz"] Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.991904 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.992680 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:12 crc kubenswrapper[4730]: I0320 16:08:12.993072 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw628\" (UniqueName: \"kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.097847 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.098054 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.098121 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw628\" (UniqueName: 
\"kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.114957 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.120074 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.125054 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw628\" (UniqueName: \"kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.212809 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.597662 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz"] Mar 20 16:08:13 crc kubenswrapper[4730]: I0320 16:08:13.793751 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" event={"ID":"962231f7-41b6-4754-b63c-523277f7cf50","Type":"ContainerStarted","Data":"65f684006305344a33bcb883887a7102383b145f70b18f1bff5162dff68a6183"} Mar 20 16:08:14 crc kubenswrapper[4730]: I0320 16:08:14.804481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" event={"ID":"962231f7-41b6-4754-b63c-523277f7cf50","Type":"ContainerStarted","Data":"23ee7694e655d68f9d7ceabeb66817e16ee241dd678d47fcd8c97daf31ec82f5"} Mar 20 16:08:14 crc kubenswrapper[4730]: I0320 16:08:14.835781 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" podStartSLOduration=2.258339066 podStartE2EDuration="2.835755921s" podCreationTimestamp="2026-03-20 16:08:12 +0000 UTC" firstStartedPulling="2026-03-20 16:08:13.602707062 +0000 UTC m=+1752.816078441" lastFinishedPulling="2026-03-20 16:08:14.180123937 +0000 UTC m=+1753.393495296" observedRunningTime="2026-03-20 16:08:14.829140228 +0000 UTC m=+1754.042511607" watchObservedRunningTime="2026-03-20 16:08:14.835755921 +0000 UTC m=+1754.049127310" Mar 20 16:08:42 crc kubenswrapper[4730]: I0320 16:08:42.879946 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:08:42 
crc kubenswrapper[4730]: I0320 16:08:42.880566 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:08:42 crc kubenswrapper[4730]: I0320 16:08:42.880638 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:08:42 crc kubenswrapper[4730]: I0320 16:08:42.881543 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:08:42 crc kubenswrapper[4730]: I0320 16:08:42.881606 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" gracePeriod=600 Mar 20 16:08:43 crc kubenswrapper[4730]: E0320 16:08:43.012148 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:08:43 crc kubenswrapper[4730]: I0320 16:08:43.119714 4730 generic.go:334] "Generic 
(PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" exitCode=0 Mar 20 16:08:43 crc kubenswrapper[4730]: I0320 16:08:43.119760 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120"} Mar 20 16:08:43 crc kubenswrapper[4730]: I0320 16:08:43.119800 4730 scope.go:117] "RemoveContainer" containerID="fb7cef3383bd559653e29a00e754a8c3366946a9ed7a655b7b70a7214aec8143" Mar 20 16:08:43 crc kubenswrapper[4730]: I0320 16:08:43.120647 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:08:43 crc kubenswrapper[4730]: E0320 16:08:43.120943 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:08:46 crc kubenswrapper[4730]: I0320 16:08:46.882672 4730 scope.go:117] "RemoveContainer" containerID="807d658a9e1c791073ad6dce59cf86eec477c7d4420c8a363f99c8986963ad00" Mar 20 16:08:55 crc kubenswrapper[4730]: I0320 16:08:55.533856 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:08:55 crc kubenswrapper[4730]: E0320 16:08:55.534637 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.079931 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e285-account-create-update-6wk66"] Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.093789 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bjqvh"] Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.101888 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e285-account-create-update-6wk66"] Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.109463 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bjqvh"] Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.546124 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16da1663-821b-4e05-95f6-df67e9fac962" path="/var/lib/kubelet/pods/16da1663-821b-4e05-95f6-df67e9fac962/volumes" Mar 20 16:08:57 crc kubenswrapper[4730]: I0320 16:08:57.546882 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c0870c-17e5-4bd4-91b1-a8df134a4fbd" path="/var/lib/kubelet/pods/17c0870c-17e5-4bd4-91b1-a8df134a4fbd/volumes" Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.046828 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fvknw"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.063978 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fvknw"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.072954 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-n9vdf"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.082891 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db41-account-create-update-x7l2w"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.091045 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x4h5x"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.099199 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c5a1-account-create-update-hfdmn"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.111575 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-n9vdf"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.122255 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x4h5x"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.135456 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db41-account-create-update-x7l2w"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.145757 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c5a1-account-create-update-hfdmn"] Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.548055 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3198c781-92f7-40f1-9b6e-ed5310febe0b" path="/var/lib/kubelet/pods/3198c781-92f7-40f1-9b6e-ed5310febe0b/volumes" Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.550802 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a532566c-ab86-4984-9212-1e48605d192b" path="/var/lib/kubelet/pods/a532566c-ab86-4984-9212-1e48605d192b/volumes" Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.552450 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40f368e-f905-465b-9af0-b0ecb753de79" path="/var/lib/kubelet/pods/c40f368e-f905-465b-9af0-b0ecb753de79/volumes" Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.553796 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c7b436cd-ff29-4a9f-9e58-4c8760b1e012" path="/var/lib/kubelet/pods/c7b436cd-ff29-4a9f-9e58-4c8760b1e012/volumes" Mar 20 16:08:59 crc kubenswrapper[4730]: I0320 16:08:59.554531 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef40906b-a3dc-45b8-8bde-dd06eaaef85c" path="/var/lib/kubelet/pods/ef40906b-a3dc-45b8-8bde-dd06eaaef85c/volumes" Mar 20 16:09:00 crc kubenswrapper[4730]: I0320 16:09:00.081390 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-87f2-account-create-update-lblc4"] Mar 20 16:09:00 crc kubenswrapper[4730]: I0320 16:09:00.099058 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-87f2-account-create-update-lblc4"] Mar 20 16:09:01 crc kubenswrapper[4730]: I0320 16:09:01.551237 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a132fe19-9294-49c6-9b1e-fe3eed7f4bae" path="/var/lib/kubelet/pods/a132fe19-9294-49c6-9b1e-fe3eed7f4bae/volumes" Mar 20 16:09:07 crc kubenswrapper[4730]: I0320 16:09:07.533766 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:09:07 crc kubenswrapper[4730]: E0320 16:09:07.534868 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:09:19 crc kubenswrapper[4730]: I0320 16:09:19.534678 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:09:19 crc kubenswrapper[4730]: E0320 16:09:19.535917 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:09:20 crc kubenswrapper[4730]: I0320 16:09:20.042793 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d92d8"] Mar 20 16:09:20 crc kubenswrapper[4730]: I0320 16:09:20.064794 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d92d8"] Mar 20 16:09:21 crc kubenswrapper[4730]: I0320 16:09:21.559580 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed167127-4e44-4877-bf9b-dbb6a23a8b3f" path="/var/lib/kubelet/pods/ed167127-4e44-4877-bf9b-dbb6a23a8b3f/volumes" Mar 20 16:09:24 crc kubenswrapper[4730]: I0320 16:09:24.044468 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mkvv4"] Mar 20 16:09:24 crc kubenswrapper[4730]: I0320 16:09:24.057569 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mkvv4"] Mar 20 16:09:25 crc kubenswrapper[4730]: I0320 16:09:25.549169 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dd8777-c196-4db2-af7a-5560a939e02c" path="/var/lib/kubelet/pods/37dd8777-c196-4db2-af7a-5560a939e02c/volumes" Mar 20 16:09:31 crc kubenswrapper[4730]: I0320 16:09:31.539186 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:09:31 crc kubenswrapper[4730]: E0320 16:09:31.540157 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.052413 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9q2kz"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.065297 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9f59-account-create-update-vmg5j"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.073936 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-87csx"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.082875 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9q2kz"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.091322 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-87csx"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.100280 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9f59-account-create-update-vmg5j"] Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.549080 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a72513-75fb-4b7e-912b-d28fa63d050a" path="/var/lib/kubelet/pods/44a72513-75fb-4b7e-912b-d28fa63d050a/volumes" Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.550711 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6118ed31-b8d7-4a7c-8769-69d996d26915" path="/var/lib/kubelet/pods/6118ed31-b8d7-4a7c-8769-69d996d26915/volumes" Mar 20 16:09:43 crc kubenswrapper[4730]: I0320 16:09:43.552233 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad93c0a8-34d6-4fee-985c-7c7307f00c0c" path="/var/lib/kubelet/pods/ad93c0a8-34d6-4fee-985c-7c7307f00c0c/volumes" Mar 20 16:09:44 crc kubenswrapper[4730]: 
I0320 16:09:44.035790 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3959-account-create-update-qxd89"] Mar 20 16:09:44 crc kubenswrapper[4730]: I0320 16:09:44.047890 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3959-account-create-update-qxd89"] Mar 20 16:09:44 crc kubenswrapper[4730]: I0320 16:09:44.534107 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:09:44 crc kubenswrapper[4730]: E0320 16:09:44.535066 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:09:45 crc kubenswrapper[4730]: I0320 16:09:45.056380 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qpb6s"] Mar 20 16:09:45 crc kubenswrapper[4730]: I0320 16:09:45.066488 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qpb6s"] Mar 20 16:09:45 crc kubenswrapper[4730]: I0320 16:09:45.547501 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c94c6d8-4c40-455a-a536-7c64e3838986" path="/var/lib/kubelet/pods/1c94c6d8-4c40-455a-a536-7c64e3838986/volumes" Mar 20 16:09:45 crc kubenswrapper[4730]: I0320 16:09:45.548740 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e01c2575-5301-494a-bf47-9a6053de9c64" path="/var/lib/kubelet/pods/e01c2575-5301-494a-bf47-9a6053de9c64/volumes" Mar 20 16:09:46 crc kubenswrapper[4730]: I0320 16:09:46.957454 4730 scope.go:117] "RemoveContainer" containerID="67c549d0aa6a1c0db14f97c3aff414699de48b304aa4c5c416c420aae8bc31a7" Mar 20 16:09:46 
crc kubenswrapper[4730]: I0320 16:09:46.987412 4730 scope.go:117] "RemoveContainer" containerID="2857b9eca0093dd961d059808f0936df2d938583bfd861b80e003896a914c165" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.063920 4730 scope.go:117] "RemoveContainer" containerID="a18801b5a50e28a1d043f07d02846b12496eaa787cc63d296052b7f86700e382" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.098680 4730 scope.go:117] "RemoveContainer" containerID="1742d4d5f625e20265689269ef8d4a8b9f9546ddd3978d31dffe002e4353d662" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.143704 4730 scope.go:117] "RemoveContainer" containerID="40e0babc7b2f63017ce242ba014b4798c26ae0c66070098c86ad2de5a7400e6c" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.186162 4730 scope.go:117] "RemoveContainer" containerID="ae717d458b43c41d279b4f17419574a7ba6d139ccd8581e792b75559eb5cba0c" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.228949 4730 scope.go:117] "RemoveContainer" containerID="a8ebba1aa3aefe2f2a84695ac23d42f8c9788cfc46f63bc2e9dead733c7274ec" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.254842 4730 scope.go:117] "RemoveContainer" containerID="30a5fc8a5ea71396f4de5cb5ef85143858b4f15e175e2aea0d88617f137cddad" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.278349 4730 scope.go:117] "RemoveContainer" containerID="58cffe0b249055c3d576f3ea017f6ee1185d299a1b49ba134b1ea8fcf81d53bd" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.298813 4730 scope.go:117] "RemoveContainer" containerID="c35a1209cb7b3066725c4f8438840ab79db396745f3b69d5ee16580ca7ae88eb" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.320257 4730 scope.go:117] "RemoveContainer" containerID="619c70ff24e78ebd6137bc20c79ee2dc5949bf1cca622b03e9fc4227379e48f4" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.359008 4730 scope.go:117] "RemoveContainer" containerID="45a09c4f4bffe31b4f9cf83737f4a3331b9ba65b3e4bbf1a00d15070f2dd1fbb" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 
16:09:47.392939 4730 scope.go:117] "RemoveContainer" containerID="b503fc415bca6f276d3faa0fabe6ea4e17e93d2815b320d70f62ffa635dc90fc" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.414511 4730 scope.go:117] "RemoveContainer" containerID="ecddce73fd871590be8e4104469454a63bf36c8d5e335fcb1236e7e17748fcf3" Mar 20 16:09:47 crc kubenswrapper[4730]: I0320 16:09:47.435886 4730 scope.go:117] "RemoveContainer" containerID="6f4ac67e084527a1cb38bd3c525c24e61f9c884d533f00e2d72554c431fbb247" Mar 20 16:09:48 crc kubenswrapper[4730]: I0320 16:09:48.034449 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c423-account-create-update-dcjc2"] Mar 20 16:09:48 crc kubenswrapper[4730]: I0320 16:09:48.048521 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c423-account-create-update-dcjc2"] Mar 20 16:09:49 crc kubenswrapper[4730]: I0320 16:09:49.544187 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e5575b-c67a-46fe-8502-efc341523de2" path="/var/lib/kubelet/pods/06e5575b-c67a-46fe-8502-efc341523de2/volumes" Mar 20 16:09:56 crc kubenswrapper[4730]: I0320 16:09:56.534689 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:09:56 crc kubenswrapper[4730]: E0320 16:09:56.535661 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:09:58 crc kubenswrapper[4730]: I0320 16:09:58.030623 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-ns6b5"] Mar 20 16:09:58 crc kubenswrapper[4730]: I0320 16:09:58.040911 4730 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-ns6b5"] Mar 20 16:09:59 crc kubenswrapper[4730]: I0320 16:09:59.038524 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rb4pw"] Mar 20 16:09:59 crc kubenswrapper[4730]: I0320 16:09:59.049318 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rb4pw"] Mar 20 16:09:59 crc kubenswrapper[4730]: I0320 16:09:59.543184 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a7eed8-de7c-4816-8bd9-e922ace376ad" path="/var/lib/kubelet/pods/92a7eed8-de7c-4816-8bd9-e922ace376ad/volumes" Mar 20 16:09:59 crc kubenswrapper[4730]: I0320 16:09:59.543814 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9577f66b-a45e-4d51-9d87-4ae757819182" path="/var/lib/kubelet/pods/9577f66b-a45e-4d51-9d87-4ae757819182/volumes" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.165774 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567050-7gqln"] Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.167861 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.170377 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.170378 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.171015 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.178602 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567050-7gqln"] Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.211606 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ck85\" (UniqueName: \"kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85\") pod \"auto-csr-approver-29567050-7gqln\" (UID: \"9d689cf2-4142-40fd-9af3-13b98b99296d\") " pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.314131 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ck85\" (UniqueName: \"kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85\") pod \"auto-csr-approver-29567050-7gqln\" (UID: \"9d689cf2-4142-40fd-9af3-13b98b99296d\") " pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.334494 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ck85\" (UniqueName: \"kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85\") pod \"auto-csr-approver-29567050-7gqln\" (UID: \"9d689cf2-4142-40fd-9af3-13b98b99296d\") " 
pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.512387 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:00 crc kubenswrapper[4730]: W0320 16:10:00.987036 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d689cf2_4142_40fd_9af3_13b98b99296d.slice/crio-38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74 WatchSource:0}: Error finding container 38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74: Status 404 returned error can't find the container with id 38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74 Mar 20 16:10:00 crc kubenswrapper[4730]: I0320 16:10:00.989302 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:10:01 crc kubenswrapper[4730]: I0320 16:10:01.000321 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567050-7gqln"] Mar 20 16:10:01 crc kubenswrapper[4730]: I0320 16:10:01.998910 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567050-7gqln" event={"ID":"9d689cf2-4142-40fd-9af3-13b98b99296d","Type":"ContainerStarted","Data":"38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74"} Mar 20 16:10:03 crc kubenswrapper[4730]: I0320 16:10:03.009188 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567050-7gqln" event={"ID":"9d689cf2-4142-40fd-9af3-13b98b99296d","Type":"ContainerStarted","Data":"41dec27fbddb23dabfba3fbf070ca912d2703e72f2511ee9ef62aa8a4e09aa09"} Mar 20 16:10:03 crc kubenswrapper[4730]: I0320 16:10:03.036042 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567050-7gqln" 
podStartSLOduration=1.451113883 podStartE2EDuration="3.036023731s" podCreationTimestamp="2026-03-20 16:10:00 +0000 UTC" firstStartedPulling="2026-03-20 16:10:00.989085362 +0000 UTC m=+1860.202456721" lastFinishedPulling="2026-03-20 16:10:02.57399519 +0000 UTC m=+1861.787366569" observedRunningTime="2026-03-20 16:10:03.028920546 +0000 UTC m=+1862.242291915" watchObservedRunningTime="2026-03-20 16:10:03.036023731 +0000 UTC m=+1862.249395100" Mar 20 16:10:04 crc kubenswrapper[4730]: I0320 16:10:04.022202 4730 generic.go:334] "Generic (PLEG): container finished" podID="9d689cf2-4142-40fd-9af3-13b98b99296d" containerID="41dec27fbddb23dabfba3fbf070ca912d2703e72f2511ee9ef62aa8a4e09aa09" exitCode=0 Mar 20 16:10:04 crc kubenswrapper[4730]: I0320 16:10:04.022292 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567050-7gqln" event={"ID":"9d689cf2-4142-40fd-9af3-13b98b99296d","Type":"ContainerDied","Data":"41dec27fbddb23dabfba3fbf070ca912d2703e72f2511ee9ef62aa8a4e09aa09"} Mar 20 16:10:05 crc kubenswrapper[4730]: I0320 16:10:05.405178 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:05 crc kubenswrapper[4730]: I0320 16:10:05.430195 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ck85\" (UniqueName: \"kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85\") pod \"9d689cf2-4142-40fd-9af3-13b98b99296d\" (UID: \"9d689cf2-4142-40fd-9af3-13b98b99296d\") " Mar 20 16:10:05 crc kubenswrapper[4730]: I0320 16:10:05.436699 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85" (OuterVolumeSpecName: "kube-api-access-4ck85") pod "9d689cf2-4142-40fd-9af3-13b98b99296d" (UID: "9d689cf2-4142-40fd-9af3-13b98b99296d"). InnerVolumeSpecName "kube-api-access-4ck85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:10:05 crc kubenswrapper[4730]: I0320 16:10:05.532405 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ck85\" (UniqueName: \"kubernetes.io/projected/9d689cf2-4142-40fd-9af3-13b98b99296d-kube-api-access-4ck85\") on node \"crc\" DevicePath \"\"" Mar 20 16:10:06 crc kubenswrapper[4730]: I0320 16:10:06.075767 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567050-7gqln" event={"ID":"9d689cf2-4142-40fd-9af3-13b98b99296d","Type":"ContainerDied","Data":"38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74"} Mar 20 16:10:06 crc kubenswrapper[4730]: I0320 16:10:06.075860 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38edbd01203ebf1ef4afe23c08185596756aa89c1057d140329631cc21b6dd74" Mar 20 16:10:06 crc kubenswrapper[4730]: I0320 16:10:06.075976 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567050-7gqln" Mar 20 16:10:06 crc kubenswrapper[4730]: I0320 16:10:06.106818 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567044-nw9nk"] Mar 20 16:10:06 crc kubenswrapper[4730]: I0320 16:10:06.118203 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567044-nw9nk"] Mar 20 16:10:07 crc kubenswrapper[4730]: I0320 16:10:07.550807 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44fa3d60-826d-4b59-b44a-0102f155b586" path="/var/lib/kubelet/pods/44fa3d60-826d-4b59-b44a-0102f155b586/volumes" Mar 20 16:10:09 crc kubenswrapper[4730]: I0320 16:10:09.533886 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:10:09 crc kubenswrapper[4730]: E0320 16:10:09.534591 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:10:20 crc kubenswrapper[4730]: I0320 16:10:20.533327 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:10:20 crc kubenswrapper[4730]: E0320 16:10:20.534330 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:10:32 crc kubenswrapper[4730]: I0320 16:10:32.064637 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4kfmn"] Mar 20 16:10:32 crc kubenswrapper[4730]: I0320 16:10:32.077452 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4kfmn"] Mar 20 16:10:33 crc kubenswrapper[4730]: I0320 16:10:33.534383 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:10:33 crc kubenswrapper[4730]: E0320 16:10:33.535070 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:10:33 crc kubenswrapper[4730]: I0320 16:10:33.549045 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedef548-ce31-47a2-92fc-911f167635f9" path="/var/lib/kubelet/pods/fedef548-ce31-47a2-92fc-911f167635f9/volumes" Mar 20 16:10:34 crc kubenswrapper[4730]: I0320 16:10:34.032838 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-x2t9r"] Mar 20 16:10:34 crc kubenswrapper[4730]: I0320 16:10:34.066021 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-x2t9r"] Mar 20 16:10:35 crc kubenswrapper[4730]: I0320 16:10:35.548649 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05675d7-cd2f-4810-862b-cb0d2d13cbdd" path="/var/lib/kubelet/pods/a05675d7-cd2f-4810-862b-cb0d2d13cbdd/volumes" Mar 20 16:10:37 crc kubenswrapper[4730]: I0320 16:10:37.030828 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tz6x7"] Mar 20 16:10:37 crc kubenswrapper[4730]: I0320 16:10:37.041234 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-tz6x7"] Mar 20 16:10:37 crc kubenswrapper[4730]: I0320 16:10:37.551799 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fc8af0-e30f-4f3f-88d3-8b054c6359ef" path="/var/lib/kubelet/pods/48fc8af0-e30f-4f3f-88d3-8b054c6359ef/volumes" Mar 20 16:10:44 crc kubenswrapper[4730]: I0320 16:10:44.030689 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-z9mtx"] Mar 20 16:10:44 crc kubenswrapper[4730]: I0320 16:10:44.040443 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hbplf"] Mar 20 16:10:44 crc kubenswrapper[4730]: I0320 16:10:44.048863 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-z9mtx"] Mar 20 16:10:44 crc kubenswrapper[4730]: I0320 16:10:44.059582 4730 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-db-sync-hbplf"] Mar 20 16:10:44 crc kubenswrapper[4730]: I0320 16:10:44.533839 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:10:44 crc kubenswrapper[4730]: E0320 16:10:44.534300 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:10:45 crc kubenswrapper[4730]: I0320 16:10:45.554241 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f27249-61fb-4e13-9eb9-9b804f256d81" path="/var/lib/kubelet/pods/09f27249-61fb-4e13-9eb9-9b804f256d81/volumes" Mar 20 16:10:45 crc kubenswrapper[4730]: I0320 16:10:45.555148 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb4d42d-6cd9-480c-8ee0-1e168504a4cd" path="/var/lib/kubelet/pods/6fb4d42d-6cd9-480c-8ee0-1e168504a4cd/volumes" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 16:10:47.726575 4730 scope.go:117] "RemoveContainer" containerID="1763f714611816ce76b822616e2726ee2af2ec1d061896faecc0edc07186595f" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 16:10:47.766368 4730 scope.go:117] "RemoveContainer" containerID="3f4c141955a3579b06be021435ce1c3642e2a9b4483a932d05648a4559764229" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 16:10:47.844055 4730 scope.go:117] "RemoveContainer" containerID="29ed2b28b91aee9b1496fc9ae566fd345c663655b5eba7831621c42547aa8e83" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 16:10:47.888546 4730 scope.go:117] "RemoveContainer" containerID="32fe76fbff47bfdd3ed0a42b1fb587052917346b2dd9af6a6803fc8251d250e7" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 
16:10:47.941961 4730 scope.go:117] "RemoveContainer" containerID="59a0ed1595de1b0849599bb5a7c10e7cfbb46ad061c13c2ab2d12fc1bc355373" Mar 20 16:10:47 crc kubenswrapper[4730]: I0320 16:10:47.974908 4730 scope.go:117] "RemoveContainer" containerID="99fba5e2cadd379521ca79369b155ec13b031c591917c4f1be4fc608956b6dda" Mar 20 16:10:48 crc kubenswrapper[4730]: I0320 16:10:48.020623 4730 scope.go:117] "RemoveContainer" containerID="2cbf92580c54611c192a57c093a66c2f77a3a73726fc9a21c3aef24b4e922f95" Mar 20 16:10:48 crc kubenswrapper[4730]: I0320 16:10:48.061197 4730 scope.go:117] "RemoveContainer" containerID="6f35041c9925accfe452d038ab9d3c1753f640407e6e4a51f0b4d6916cb04e6f" Mar 20 16:10:48 crc kubenswrapper[4730]: I0320 16:10:48.082515 4730 scope.go:117] "RemoveContainer" containerID="3ad79a5f57b1a4c7b377fb15d13f7708e0e00b53bbc48929b06820bc137a571e" Mar 20 16:10:59 crc kubenswrapper[4730]: I0320 16:10:59.533265 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:10:59 crc kubenswrapper[4730]: E0320 16:10:59.534015 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:11:02 crc kubenswrapper[4730]: I0320 16:11:02.717633 4730 generic.go:334] "Generic (PLEG): container finished" podID="962231f7-41b6-4754-b63c-523277f7cf50" containerID="23ee7694e655d68f9d7ceabeb66817e16ee241dd678d47fcd8c97daf31ec82f5" exitCode=0 Mar 20 16:11:02 crc kubenswrapper[4730]: I0320 16:11:02.717730 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" 
event={"ID":"962231f7-41b6-4754-b63c-523277f7cf50","Type":"ContainerDied","Data":"23ee7694e655d68f9d7ceabeb66817e16ee241dd678d47fcd8c97daf31ec82f5"} Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.211826 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.370203 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw628\" (UniqueName: \"kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628\") pod \"962231f7-41b6-4754-b63c-523277f7cf50\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.370310 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam\") pod \"962231f7-41b6-4754-b63c-523277f7cf50\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.370348 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory\") pod \"962231f7-41b6-4754-b63c-523277f7cf50\" (UID: \"962231f7-41b6-4754-b63c-523277f7cf50\") " Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.381866 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628" (OuterVolumeSpecName: "kube-api-access-sw628") pod "962231f7-41b6-4754-b63c-523277f7cf50" (UID: "962231f7-41b6-4754-b63c-523277f7cf50"). InnerVolumeSpecName "kube-api-access-sw628". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.396065 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory" (OuterVolumeSpecName: "inventory") pod "962231f7-41b6-4754-b63c-523277f7cf50" (UID: "962231f7-41b6-4754-b63c-523277f7cf50"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.397072 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "962231f7-41b6-4754-b63c-523277f7cf50" (UID: "962231f7-41b6-4754-b63c-523277f7cf50"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.472416 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw628\" (UniqueName: \"kubernetes.io/projected/962231f7-41b6-4754-b63c-523277f7cf50-kube-api-access-sw628\") on node \"crc\" DevicePath \"\"" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.472655 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.472743 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962231f7-41b6-4754-b63c-523277f7cf50-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.739314 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" 
event={"ID":"962231f7-41b6-4754-b63c-523277f7cf50","Type":"ContainerDied","Data":"65f684006305344a33bcb883887a7102383b145f70b18f1bff5162dff68a6183"} Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.739361 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f684006305344a33bcb883887a7102383b145f70b18f1bff5162dff68a6183" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.739356 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.843017 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j"] Mar 20 16:11:04 crc kubenswrapper[4730]: E0320 16:11:04.843571 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962231f7-41b6-4754-b63c-523277f7cf50" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.843595 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="962231f7-41b6-4754-b63c-523277f7cf50" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 20 16:11:04 crc kubenswrapper[4730]: E0320 16:11:04.843633 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d689cf2-4142-40fd-9af3-13b98b99296d" containerName="oc" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.843643 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d689cf2-4142-40fd-9af3-13b98b99296d" containerName="oc" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.843877 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d689cf2-4142-40fd-9af3-13b98b99296d" containerName="oc" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.843920 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="962231f7-41b6-4754-b63c-523277f7cf50" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.844783 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.846623 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.851704 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.852660 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.857095 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.874226 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j"] Mar 20 16:11:04 crc kubenswrapper[4730]: E0320 16:11:04.879434 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod962231f7_41b6_4754_b63c_523277f7cf50.slice\": RecentStats: unable to find data in memory cache]" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.982508 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.982840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:04 crc kubenswrapper[4730]: I0320 16:11:04.983013 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.085536 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.085594 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: 
I0320 16:11:05.085686 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.097842 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.098283 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.102873 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dff2j\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.170949 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:11:05 crc kubenswrapper[4730]: W0320 16:11:05.743563 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca62ee94_4983_4acc_856a_3faf59cae3e1.slice/crio-538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76 WatchSource:0}: Error finding container 538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76: Status 404 returned error can't find the container with id 538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76 Mar 20 16:11:05 crc kubenswrapper[4730]: I0320 16:11:05.745020 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j"] Mar 20 16:11:06 crc kubenswrapper[4730]: I0320 16:11:06.759365 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" event={"ID":"ca62ee94-4983-4acc-856a-3faf59cae3e1","Type":"ContainerStarted","Data":"ba64daba5abdb4346cb601ada5074367a7382d23792549e6e38bb2f01ba55227"} Mar 20 16:11:06 crc kubenswrapper[4730]: I0320 16:11:06.759884 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" event={"ID":"ca62ee94-4983-4acc-856a-3faf59cae3e1","Type":"ContainerStarted","Data":"538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76"} Mar 20 16:11:06 crc kubenswrapper[4730]: I0320 16:11:06.780546 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" podStartSLOduration=2.123453273 podStartE2EDuration="2.780517414s" podCreationTimestamp="2026-03-20 16:11:04 +0000 UTC" firstStartedPulling="2026-03-20 16:11:05.745793108 +0000 UTC m=+1924.959164477" lastFinishedPulling="2026-03-20 16:11:06.402857259 +0000 
UTC m=+1925.616228618" observedRunningTime="2026-03-20 16:11:06.77721068 +0000 UTC m=+1925.990582049" watchObservedRunningTime="2026-03-20 16:11:06.780517414 +0000 UTC m=+1925.993888833" Mar 20 16:11:10 crc kubenswrapper[4730]: I0320 16:11:10.534113 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:11:10 crc kubenswrapper[4730]: E0320 16:11:10.534614 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:11:24 crc kubenswrapper[4730]: I0320 16:11:24.533066 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:11:24 crc kubenswrapper[4730]: E0320 16:11:24.534219 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:11:26 crc kubenswrapper[4730]: I0320 16:11:26.059566 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4a43-account-create-update-cj4kg"] Mar 20 16:11:26 crc kubenswrapper[4730]: I0320 16:11:26.069126 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4a43-account-create-update-cj4kg"] Mar 20 16:11:27 crc kubenswrapper[4730]: I0320 16:11:27.032026 4730 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-d0d2-account-create-update-z6v46"] Mar 20 16:11:27 crc kubenswrapper[4730]: I0320 16:11:27.040747 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d0d2-account-create-update-z6v46"] Mar 20 16:11:27 crc kubenswrapper[4730]: I0320 16:11:27.544895 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383cf79a-0636-4175-bcf8-7e369f101901" path="/var/lib/kubelet/pods/383cf79a-0636-4175-bcf8-7e369f101901/volumes" Mar 20 16:11:27 crc kubenswrapper[4730]: I0320 16:11:27.545603 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de61c5d-53ba-4d26-9a79-b82c2bc3b779" path="/var/lib/kubelet/pods/7de61c5d-53ba-4d26-9a79-b82c2bc3b779/volumes" Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.052682 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tv4tn"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.063367 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rlp9c"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.074013 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tv4tn"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.084446 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rlp9c"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.093887 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qt4mz"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.104304 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qt4mz"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.114112 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2850-account-create-update-4lrrq"] Mar 20 16:11:28 crc kubenswrapper[4730]: I0320 16:11:28.122426 4730 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell0-2850-account-create-update-4lrrq"] Mar 20 16:11:29 crc kubenswrapper[4730]: I0320 16:11:29.551915 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f625a9e-a940-476b-85b2-ff54c5e87785" path="/var/lib/kubelet/pods/3f625a9e-a940-476b-85b2-ff54c5e87785/volumes" Mar 20 16:11:29 crc kubenswrapper[4730]: I0320 16:11:29.553095 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475a52ba-bc8d-4c7b-ae99-330d6ec2b358" path="/var/lib/kubelet/pods/475a52ba-bc8d-4c7b-ae99-330d6ec2b358/volumes" Mar 20 16:11:29 crc kubenswrapper[4730]: I0320 16:11:29.553961 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1" path="/var/lib/kubelet/pods/6b9cde2b-3d05-44ba-898b-6fe8cfab4fe1/volumes" Mar 20 16:11:29 crc kubenswrapper[4730]: I0320 16:11:29.554772 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac41622-7c80-4fce-a5ac-8a04d301669d" path="/var/lib/kubelet/pods/dac41622-7c80-4fce-a5ac-8a04d301669d/volumes" Mar 20 16:11:37 crc kubenswrapper[4730]: I0320 16:11:37.533521 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:11:37 crc kubenswrapper[4730]: E0320 16:11:37.534294 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.289482 4730 scope.go:117] "RemoveContainer" containerID="3a26f3f5793abd65e69907fa90ac71e2abaa4ea13397a929de033f2bbf59a251" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.315023 
4730 scope.go:117] "RemoveContainer" containerID="fa7fa1c1a12965d7d645639e89c04d923a3d343c7729667128d205eaaba9942e" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.365492 4730 scope.go:117] "RemoveContainer" containerID="b8f3077acd6da12cfe1f43474ad395781d9175a1c666f3763b8d16340af465ed" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.412156 4730 scope.go:117] "RemoveContainer" containerID="f2c396b5999dcbacb34f0cb38c776c483344cb3d6a6925954ac69d2fbac35de7" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.470519 4730 scope.go:117] "RemoveContainer" containerID="be1153307c9e28a344ac73169445af77d8ee3c7d9c2256c03916bd83fc0e8437" Mar 20 16:11:48 crc kubenswrapper[4730]: I0320 16:11:48.514019 4730 scope.go:117] "RemoveContainer" containerID="0d50b068c846deeebd08139bc4d49513ed414820309b36a4108d6ffa43871b84" Mar 20 16:11:49 crc kubenswrapper[4730]: I0320 16:11:49.533582 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:11:49 crc kubenswrapper[4730]: E0320 16:11:49.534215 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.149668 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567052-fb4zl"] Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.151646 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.154528 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.154531 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.154899 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.160713 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567052-fb4zl"] Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.206702 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wstx\" (UniqueName: \"kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx\") pod \"auto-csr-approver-29567052-fb4zl\" (UID: \"28beb66f-2a64-4bcf-94eb-676ef7f1236a\") " pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.308693 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wstx\" (UniqueName: \"kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx\") pod \"auto-csr-approver-29567052-fb4zl\" (UID: \"28beb66f-2a64-4bcf-94eb-676ef7f1236a\") " pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.328164 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wstx\" (UniqueName: \"kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx\") pod \"auto-csr-approver-29567052-fb4zl\" (UID: \"28beb66f-2a64-4bcf-94eb-676ef7f1236a\") " 
pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:00 crc kubenswrapper[4730]: I0320 16:12:00.470738 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.053625 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsrbd"] Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.066173 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qsrbd"] Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.077141 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567052-fb4zl"] Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.294933 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" event={"ID":"28beb66f-2a64-4bcf-94eb-676ef7f1236a","Type":"ContainerStarted","Data":"54935f3ad3700c2130e20a5d274c6d8117588eac3d0758ef70339f1dc6c158e7"} Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.532941 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:12:01 crc kubenswrapper[4730]: E0320 16:12:01.533177 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:12:01 crc kubenswrapper[4730]: I0320 16:12:01.544709 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279d2368-abe1-465a-9007-68542e5dbfc4" 
path="/var/lib/kubelet/pods/279d2368-abe1-465a-9007-68542e5dbfc4/volumes" Mar 20 16:12:04 crc kubenswrapper[4730]: I0320 16:12:04.336448 4730 generic.go:334] "Generic (PLEG): container finished" podID="28beb66f-2a64-4bcf-94eb-676ef7f1236a" containerID="09bf1a5b6b98230c97ec660d74eb6fc018c3a7b8b7105355e719108bd3861003" exitCode=0 Mar 20 16:12:04 crc kubenswrapper[4730]: I0320 16:12:04.336938 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" event={"ID":"28beb66f-2a64-4bcf-94eb-676ef7f1236a","Type":"ContainerDied","Data":"09bf1a5b6b98230c97ec660d74eb6fc018c3a7b8b7105355e719108bd3861003"} Mar 20 16:12:05 crc kubenswrapper[4730]: I0320 16:12:05.689396 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:05 crc kubenswrapper[4730]: I0320 16:12:05.830181 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wstx\" (UniqueName: \"kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx\") pod \"28beb66f-2a64-4bcf-94eb-676ef7f1236a\" (UID: \"28beb66f-2a64-4bcf-94eb-676ef7f1236a\") " Mar 20 16:12:05 crc kubenswrapper[4730]: I0320 16:12:05.837390 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx" (OuterVolumeSpecName: "kube-api-access-4wstx") pod "28beb66f-2a64-4bcf-94eb-676ef7f1236a" (UID: "28beb66f-2a64-4bcf-94eb-676ef7f1236a"). InnerVolumeSpecName "kube-api-access-4wstx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:12:05 crc kubenswrapper[4730]: I0320 16:12:05.932690 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wstx\" (UniqueName: \"kubernetes.io/projected/28beb66f-2a64-4bcf-94eb-676ef7f1236a-kube-api-access-4wstx\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:06 crc kubenswrapper[4730]: I0320 16:12:06.356083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" event={"ID":"28beb66f-2a64-4bcf-94eb-676ef7f1236a","Type":"ContainerDied","Data":"54935f3ad3700c2130e20a5d274c6d8117588eac3d0758ef70339f1dc6c158e7"} Mar 20 16:12:06 crc kubenswrapper[4730]: I0320 16:12:06.356163 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54935f3ad3700c2130e20a5d274c6d8117588eac3d0758ef70339f1dc6c158e7" Mar 20 16:12:06 crc kubenswrapper[4730]: I0320 16:12:06.356127 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567052-fb4zl" Mar 20 16:12:06 crc kubenswrapper[4730]: I0320 16:12:06.751894 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567046-8f6sv"] Mar 20 16:12:06 crc kubenswrapper[4730]: I0320 16:12:06.762277 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567046-8f6sv"] Mar 20 16:12:07 crc kubenswrapper[4730]: I0320 16:12:07.545706 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dacfdca-1b6e-4336-8089-722d36388128" path="/var/lib/kubelet/pods/8dacfdca-1b6e-4336-8089-722d36388128/volumes" Mar 20 16:12:13 crc kubenswrapper[4730]: I0320 16:12:13.535240 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:12:13 crc kubenswrapper[4730]: E0320 16:12:13.535991 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:12:15 crc kubenswrapper[4730]: I0320 16:12:15.464110 4730 generic.go:334] "Generic (PLEG): container finished" podID="ca62ee94-4983-4acc-856a-3faf59cae3e1" containerID="ba64daba5abdb4346cb601ada5074367a7382d23792549e6e38bb2f01ba55227" exitCode=0 Mar 20 16:12:15 crc kubenswrapper[4730]: I0320 16:12:15.464172 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" event={"ID":"ca62ee94-4983-4acc-856a-3faf59cae3e1","Type":"ContainerDied","Data":"ba64daba5abdb4346cb601ada5074367a7382d23792549e6e38bb2f01ba55227"} Mar 20 16:12:16 crc kubenswrapper[4730]: I0320 16:12:16.896327 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.047902 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4\") pod \"ca62ee94-4983-4acc-856a-3faf59cae3e1\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.049683 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam\") pod \"ca62ee94-4983-4acc-856a-3faf59cae3e1\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.049949 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory\") pod \"ca62ee94-4983-4acc-856a-3faf59cae3e1\" (UID: \"ca62ee94-4983-4acc-856a-3faf59cae3e1\") " Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.054623 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4" (OuterVolumeSpecName: "kube-api-access-lkgt4") pod "ca62ee94-4983-4acc-856a-3faf59cae3e1" (UID: "ca62ee94-4983-4acc-856a-3faf59cae3e1"). InnerVolumeSpecName "kube-api-access-lkgt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.082463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca62ee94-4983-4acc-856a-3faf59cae3e1" (UID: "ca62ee94-4983-4acc-856a-3faf59cae3e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.086574 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory" (OuterVolumeSpecName: "inventory") pod "ca62ee94-4983-4acc-856a-3faf59cae3e1" (UID: "ca62ee94-4983-4acc-856a-3faf59cae3e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.152293 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.152330 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca62ee94-4983-4acc-856a-3faf59cae3e1-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.152343 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/ca62ee94-4983-4acc-856a-3faf59cae3e1-kube-api-access-lkgt4\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.490925 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" 
event={"ID":"ca62ee94-4983-4acc-856a-3faf59cae3e1","Type":"ContainerDied","Data":"538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76"} Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.490963 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538df20e0c75ebd57cd2833241a13b43747c6f23a8c6a93077516d777fb91e76" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.491009 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dff2j" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.572950 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt"] Mar 20 16:12:17 crc kubenswrapper[4730]: E0320 16:12:17.573457 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28beb66f-2a64-4bcf-94eb-676ef7f1236a" containerName="oc" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.573478 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="28beb66f-2a64-4bcf-94eb-676ef7f1236a" containerName="oc" Mar 20 16:12:17 crc kubenswrapper[4730]: E0320 16:12:17.573520 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca62ee94-4983-4acc-856a-3faf59cae3e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.573532 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca62ee94-4983-4acc-856a-3faf59cae3e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.573766 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca62ee94-4983-4acc-856a-3faf59cae3e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.573795 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="28beb66f-2a64-4bcf-94eb-676ef7f1236a" containerName="oc" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.574641 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt"] Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.574740 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.577312 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.577663 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.577906 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.578632 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.664333 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.664732 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.664758 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jghq5\" (UniqueName: \"kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.766625 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.766696 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jghq5\" (UniqueName: \"kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.766839 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.772572 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.783028 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.785848 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jghq5\" (UniqueName: \"kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:17 crc kubenswrapper[4730]: I0320 16:12:17.917943 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:18 crc kubenswrapper[4730]: I0320 16:12:18.432036 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt"] Mar 20 16:12:18 crc kubenswrapper[4730]: I0320 16:12:18.501053 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" event={"ID":"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122","Type":"ContainerStarted","Data":"ba38e4aed467787dbc07075fa373dd244be411bd464608954c22c323b546253a"} Mar 20 16:12:20 crc kubenswrapper[4730]: I0320 16:12:20.523229 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" event={"ID":"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122","Type":"ContainerStarted","Data":"ea1db0887cceb3db382e0f4b2a1444525bccb60f8caf78058e5d02523c9362c1"} Mar 20 16:12:20 crc kubenswrapper[4730]: I0320 16:12:20.547563 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" podStartSLOduration=2.705316993 podStartE2EDuration="3.547538477s" podCreationTimestamp="2026-03-20 16:12:17 +0000 UTC" firstStartedPulling="2026-03-20 16:12:18.442732538 +0000 UTC m=+1997.656103907" lastFinishedPulling="2026-03-20 16:12:19.284954022 +0000 UTC m=+1998.498325391" observedRunningTime="2026-03-20 16:12:20.54555456 +0000 UTC m=+1999.758925929" watchObservedRunningTime="2026-03-20 16:12:20.547538477 +0000 UTC m=+1999.760909876" Mar 20 16:12:22 crc kubenswrapper[4730]: I0320 16:12:22.029706 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-f7rjc"] Mar 20 16:12:22 crc kubenswrapper[4730]: I0320 16:12:22.041069 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-f7rjc"] Mar 20 16:12:23 crc 
kubenswrapper[4730]: I0320 16:12:23.544053 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f144e50-8d18-49a5-a3ef-84b72e6e119f" path="/var/lib/kubelet/pods/8f144e50-8d18-49a5-a3ef-84b72e6e119f/volumes" Mar 20 16:12:24 crc kubenswrapper[4730]: I0320 16:12:24.533966 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:12:24 crc kubenswrapper[4730]: E0320 16:12:24.534748 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:12:24 crc kubenswrapper[4730]: I0320 16:12:24.566665 4730 generic.go:334] "Generic (PLEG): container finished" podID="cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" containerID="ea1db0887cceb3db382e0f4b2a1444525bccb60f8caf78058e5d02523c9362c1" exitCode=0 Mar 20 16:12:24 crc kubenswrapper[4730]: I0320 16:12:24.566721 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" event={"ID":"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122","Type":"ContainerDied","Data":"ea1db0887cceb3db382e0f4b2a1444525bccb60f8caf78058e5d02523c9362c1"} Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.034985 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.155561 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory\") pod \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.155670 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jghq5\" (UniqueName: \"kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5\") pod \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.155749 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam\") pod \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\" (UID: \"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122\") " Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.169102 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5" (OuterVolumeSpecName: "kube-api-access-jghq5") pod "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" (UID: "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122"). InnerVolumeSpecName "kube-api-access-jghq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.183302 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory" (OuterVolumeSpecName: "inventory") pod "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" (UID: "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.186229 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" (UID: "cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.258049 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.258077 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jghq5\" (UniqueName: \"kubernetes.io/projected/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-kube-api-access-jghq5\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.258087 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.591907 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" event={"ID":"cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122","Type":"ContainerDied","Data":"ba38e4aed467787dbc07075fa373dd244be411bd464608954c22c323b546253a"} Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.591948 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba38e4aed467787dbc07075fa373dd244be411bd464608954c22c323b546253a" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 
16:12:26.591991 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.690889 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx"] Mar 20 16:12:26 crc kubenswrapper[4730]: E0320 16:12:26.691430 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.691451 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.691718 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.692601 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.695341 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.695874 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.695951 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.696176 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.703613 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx"] Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.867568 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.867840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.867911 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlld\" (UniqueName: \"kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.970288 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.970719 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.970843 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txlld\" (UniqueName: \"kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.975521 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.975741 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:26 crc kubenswrapper[4730]: I0320 16:12:26.987749 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlld\" (UniqueName: \"kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dkksx\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:27 crc kubenswrapper[4730]: I0320 16:12:27.018055 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:12:27 crc kubenswrapper[4730]: I0320 16:12:27.545791 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx"] Mar 20 16:12:27 crc kubenswrapper[4730]: I0320 16:12:27.601138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" event={"ID":"133f1969-bed7-44cd-9dac-b9dfaa376515","Type":"ContainerStarted","Data":"e961a9b21d5ba22c9457891c45fc6f0b988237cbd4055dc60985a974e06c6a15"} Mar 20 16:12:28 crc kubenswrapper[4730]: I0320 16:12:28.067586 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdhdz"] Mar 20 16:12:28 crc kubenswrapper[4730]: I0320 16:12:28.082160 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdhdz"] Mar 20 16:12:29 crc kubenswrapper[4730]: I0320 16:12:29.547587 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647" path="/var/lib/kubelet/pods/ddb7b5ed-16a0-4fb8-96d3-df3c9c2cd647/volumes" Mar 20 16:12:29 crc kubenswrapper[4730]: I0320 16:12:29.620736 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" event={"ID":"133f1969-bed7-44cd-9dac-b9dfaa376515","Type":"ContainerStarted","Data":"c4cdbcf8144688f6a26daaef331c10d18244b2a1a28d3b1d4832f69e23275537"} Mar 20 16:12:38 crc kubenswrapper[4730]: I0320 16:12:38.533424 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:12:38 crc kubenswrapper[4730]: E0320 16:12:38.534389 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:12:48 crc kubenswrapper[4730]: I0320 16:12:48.671433 4730 scope.go:117] "RemoveContainer" containerID="78840c5174380df1ec37853cf5867820b6c6e093e27294e16eb3795a80e9c2a8" Mar 20 16:12:48 crc kubenswrapper[4730]: I0320 16:12:48.719411 4730 scope.go:117] "RemoveContainer" containerID="6e67353e8a39d519cf4269a9771ca30ae4c8d30443c293283645c96cf02f2776" Mar 20 16:12:48 crc kubenswrapper[4730]: I0320 16:12:48.768400 4730 scope.go:117] "RemoveContainer" containerID="79f3caf37f32c7308d415a20940a3f1cbb774116c3657a269e41ea28bde4ad32" Mar 20 16:12:48 crc kubenswrapper[4730]: I0320 16:12:48.817886 4730 scope.go:117] "RemoveContainer" containerID="712419554ee7980049c60af4c6c43298daddafb72a74d5efc70ba46df50bba0e" Mar 20 16:12:50 crc kubenswrapper[4730]: I0320 16:12:50.533166 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:12:50 crc kubenswrapper[4730]: E0320 16:12:50.533795 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:13:04 crc kubenswrapper[4730]: I0320 16:13:04.533163 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:13:04 crc kubenswrapper[4730]: E0320 16:13:04.533983 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:13:04 crc kubenswrapper[4730]: I0320 16:13:04.943967 4730 generic.go:334] "Generic (PLEG): container finished" podID="133f1969-bed7-44cd-9dac-b9dfaa376515" containerID="c4cdbcf8144688f6a26daaef331c10d18244b2a1a28d3b1d4832f69e23275537" exitCode=0 Mar 20 16:13:04 crc kubenswrapper[4730]: I0320 16:13:04.944033 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" event={"ID":"133f1969-bed7-44cd-9dac-b9dfaa376515","Type":"ContainerDied","Data":"c4cdbcf8144688f6a26daaef331c10d18244b2a1a28d3b1d4832f69e23275537"} Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.050377 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j2w4"] Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.057757 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j2w4"] Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.396559 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.590995 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory\") pod \"133f1969-bed7-44cd-9dac-b9dfaa376515\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.591543 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txlld\" (UniqueName: \"kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld\") pod \"133f1969-bed7-44cd-9dac-b9dfaa376515\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.591586 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam\") pod \"133f1969-bed7-44cd-9dac-b9dfaa376515\" (UID: \"133f1969-bed7-44cd-9dac-b9dfaa376515\") " Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.605214 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld" (OuterVolumeSpecName: "kube-api-access-txlld") pod "133f1969-bed7-44cd-9dac-b9dfaa376515" (UID: "133f1969-bed7-44cd-9dac-b9dfaa376515"). InnerVolumeSpecName "kube-api-access-txlld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.617730 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory" (OuterVolumeSpecName: "inventory") pod "133f1969-bed7-44cd-9dac-b9dfaa376515" (UID: "133f1969-bed7-44cd-9dac-b9dfaa376515"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.624230 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "133f1969-bed7-44cd-9dac-b9dfaa376515" (UID: "133f1969-bed7-44cd-9dac-b9dfaa376515"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.693989 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.694019 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txlld\" (UniqueName: \"kubernetes.io/projected/133f1969-bed7-44cd-9dac-b9dfaa376515-kube-api-access-txlld\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.694029 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/133f1969-bed7-44cd-9dac-b9dfaa376515-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.970631 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" event={"ID":"133f1969-bed7-44cd-9dac-b9dfaa376515","Type":"ContainerDied","Data":"e961a9b21d5ba22c9457891c45fc6f0b988237cbd4055dc60985a974e06c6a15"} Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 16:13:06.970671 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e961a9b21d5ba22c9457891c45fc6f0b988237cbd4055dc60985a974e06c6a15" Mar 20 16:13:06 crc kubenswrapper[4730]: I0320 
16:13:06.970714 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dkksx" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.079234 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj"] Mar 20 16:13:07 crc kubenswrapper[4730]: E0320 16:13:07.079787 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133f1969-bed7-44cd-9dac-b9dfaa376515" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.079805 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="133f1969-bed7-44cd-9dac-b9dfaa376515" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.080040 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="133f1969-bed7-44cd-9dac-b9dfaa376515" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.080829 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.087994 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.087935 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.088228 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.088346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.100578 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.100636 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5q7k\" (UniqueName: \"kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.100692 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.106004 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj"] Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.203011 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.203083 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5q7k\" (UniqueName: \"kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.203169 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.206929 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.207080 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.222480 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5q7k\" (UniqueName: \"kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m89bj\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.402134 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.546818 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6941d556-3020-4344-b185-5d79cf68187c" path="/var/lib/kubelet/pods/6941d556-3020-4344-b185-5d79cf68187c/volumes" Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.718642 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj"] Mar 20 16:13:07 crc kubenswrapper[4730]: I0320 16:13:07.980615 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" event={"ID":"a8c27e63-ebf9-45ff-87b2-4782b20e19e3","Type":"ContainerStarted","Data":"01d39be61543c09265f967d5b7ada582d06aa7ac17db1c635eb6ef1c127d2c54"} Mar 20 16:13:08 crc kubenswrapper[4730]: I0320 16:13:08.991742 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" event={"ID":"a8c27e63-ebf9-45ff-87b2-4782b20e19e3","Type":"ContainerStarted","Data":"f5e9f1ce126d736eed319722f72b288bfa7b28dcea13acceb2dec38e7da569bc"} Mar 20 16:13:09 crc kubenswrapper[4730]: I0320 16:13:09.016146 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" podStartSLOduration=1.492085203 podStartE2EDuration="2.016131757s" podCreationTimestamp="2026-03-20 16:13:07 +0000 UTC" firstStartedPulling="2026-03-20 16:13:07.721547067 +0000 UTC m=+2046.934918436" lastFinishedPulling="2026-03-20 16:13:08.245593591 +0000 UTC m=+2047.458964990" observedRunningTime="2026-03-20 16:13:09.01442872 +0000 UTC m=+2048.227800089" watchObservedRunningTime="2026-03-20 16:13:09.016131757 +0000 UTC m=+2048.229503126" Mar 20 16:13:18 crc kubenswrapper[4730]: I0320 16:13:18.533411 4730 scope.go:117] "RemoveContainer" 
containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:13:18 crc kubenswrapper[4730]: E0320 16:13:18.534355 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.346053 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"] Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.348669 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.360827 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"] Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.425835 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.425955 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.426178 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.527552 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.527681 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.527778 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.528135 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.528151 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.548044 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25\") pod \"redhat-marketplace-cf7jm\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") " pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.671531 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.941611 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.945584 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:32 crc kubenswrapper[4730]: I0320 16:13:32.976734 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.043639 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c64hh\" (UniqueName: \"kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.043836 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.043889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.130110 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"] Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.145778 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c64hh\" (UniqueName: \"kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh\") pod \"community-operators-kknb6\" (UID: 
\"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.145964 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.146013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.146601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.146685 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.172303 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c64hh\" (UniqueName: \"kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh\") pod \"community-operators-kknb6\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " 
pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.237739 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerStarted","Data":"b4039747777cef975b706863c807a20fc0909a15358be96231a91bbe7aaf26bc"} Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.274282 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.536529 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:13:33 crc kubenswrapper[4730]: E0320 16:13:33.537681 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:13:33 crc kubenswrapper[4730]: W0320 16:13:33.783271 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb134de24_b8f7_4727_b32f_82c48b28787c.slice/crio-a676f4331dd426611b34d9f9540e81297862d80665c28c5215dc7f1301e36ad2 WatchSource:0}: Error finding container a676f4331dd426611b34d9f9540e81297862d80665c28c5215dc7f1301e36ad2: Status 404 returned error can't find the container with id a676f4331dd426611b34d9f9540e81297862d80665c28c5215dc7f1301e36ad2 Mar 20 16:13:33 crc kubenswrapper[4730]: I0320 16:13:33.783404 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:34 crc kubenswrapper[4730]: 
I0320 16:13:34.247634 4730 generic.go:334] "Generic (PLEG): container finished" podID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerID="329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae" exitCode=0 Mar 20 16:13:34 crc kubenswrapper[4730]: I0320 16:13:34.247709 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerDied","Data":"329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae"} Mar 20 16:13:34 crc kubenswrapper[4730]: I0320 16:13:34.252010 4730 generic.go:334] "Generic (PLEG): container finished" podID="b134de24-b8f7-4727-b32f-82c48b28787c" containerID="9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8" exitCode=0 Mar 20 16:13:34 crc kubenswrapper[4730]: I0320 16:13:34.252055 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerDied","Data":"9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8"} Mar 20 16:13:34 crc kubenswrapper[4730]: I0320 16:13:34.252088 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerStarted","Data":"a676f4331dd426611b34d9f9540e81297862d80665c28c5215dc7f1301e36ad2"} Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.267362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerStarted","Data":"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f"} Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.277847 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" 
event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerStarted","Data":"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"} Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.347602 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-smz8t"] Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.350333 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.357304 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smz8t"] Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.392069 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.392159 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcg8l\" (UniqueName: \"kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.392258 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.493882 
4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.494370 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.494388 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.494492 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcg8l\" (UniqueName: \"kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.494641 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.514206 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tcg8l\" (UniqueName: \"kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l\") pod \"certified-operators-smz8t\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") " pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:35 crc kubenswrapper[4730]: I0320 16:13:35.716735 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.043126 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smz8t"] Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.287803 4730 generic.go:334] "Generic (PLEG): container finished" podID="b134de24-b8f7-4727-b32f-82c48b28787c" containerID="6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f" exitCode=0 Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.287891 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerDied","Data":"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f"} Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.289648 4730 generic.go:334] "Generic (PLEG): container finished" podID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerID="695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd" exitCode=0 Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.289709 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerDied","Data":"695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd"} Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.289731 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" 
event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerStarted","Data":"000e4fc2d8a82235e0aca159d20f3965efb17246205b4d4c27e700483ac7437b"} Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.292143 4730 generic.go:334] "Generic (PLEG): container finished" podID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerID="478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294" exitCode=0 Mar 20 16:13:36 crc kubenswrapper[4730]: I0320 16:13:36.292170 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerDied","Data":"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"} Mar 20 16:13:37 crc kubenswrapper[4730]: I0320 16:13:37.302999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerStarted","Data":"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"} Mar 20 16:13:37 crc kubenswrapper[4730]: I0320 16:13:37.307402 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerStarted","Data":"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"} Mar 20 16:13:37 crc kubenswrapper[4730]: I0320 16:13:37.352224 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cf7jm" podStartSLOduration=2.6395005830000002 podStartE2EDuration="5.352204427s" podCreationTimestamp="2026-03-20 16:13:32 +0000 UTC" firstStartedPulling="2026-03-20 16:13:34.250223694 +0000 UTC m=+2073.463595093" lastFinishedPulling="2026-03-20 16:13:36.962927568 +0000 UTC m=+2076.176298937" observedRunningTime="2026-03-20 16:13:37.348365431 +0000 UTC m=+2076.561736820" watchObservedRunningTime="2026-03-20 16:13:37.352204427 +0000 UTC 
m=+2076.565575806" Mar 20 16:13:38 crc kubenswrapper[4730]: I0320 16:13:38.317171 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerStarted","Data":"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc"} Mar 20 16:13:38 crc kubenswrapper[4730]: I0320 16:13:38.319888 4730 generic.go:334] "Generic (PLEG): container finished" podID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerID="299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e" exitCode=0 Mar 20 16:13:38 crc kubenswrapper[4730]: I0320 16:13:38.320965 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerDied","Data":"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"} Mar 20 16:13:38 crc kubenswrapper[4730]: I0320 16:13:38.341528 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kknb6" podStartSLOduration=3.377736151 podStartE2EDuration="6.341509934s" podCreationTimestamp="2026-03-20 16:13:32 +0000 UTC" firstStartedPulling="2026-03-20 16:13:34.253804103 +0000 UTC m=+2073.467175472" lastFinishedPulling="2026-03-20 16:13:37.217577896 +0000 UTC m=+2076.430949255" observedRunningTime="2026-03-20 16:13:38.33416931 +0000 UTC m=+2077.547540679" watchObservedRunningTime="2026-03-20 16:13:38.341509934 +0000 UTC m=+2077.554881303" Mar 20 16:13:39 crc kubenswrapper[4730]: I0320 16:13:39.329225 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerStarted","Data":"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"} Mar 20 16:13:39 crc kubenswrapper[4730]: I0320 16:13:39.349448 4730 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-smz8t" podStartSLOduration=1.6895445310000001 podStartE2EDuration="4.349428527s" podCreationTimestamp="2026-03-20 16:13:35 +0000 UTC" firstStartedPulling="2026-03-20 16:13:36.291175739 +0000 UTC m=+2075.504547108" lastFinishedPulling="2026-03-20 16:13:38.951059735 +0000 UTC m=+2078.164431104" observedRunningTime="2026-03-20 16:13:39.346089334 +0000 UTC m=+2078.559460693" watchObservedRunningTime="2026-03-20 16:13:39.349428527 +0000 UTC m=+2078.562799896" Mar 20 16:13:42 crc kubenswrapper[4730]: I0320 16:13:42.672040 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:42 crc kubenswrapper[4730]: I0320 16:13:42.673496 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:42 crc kubenswrapper[4730]: I0320 16:13:42.741791 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:43 crc kubenswrapper[4730]: I0320 16:13:43.274830 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:43 crc kubenswrapper[4730]: I0320 16:13:43.274937 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:43 crc kubenswrapper[4730]: I0320 16:13:43.332172 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:43 crc kubenswrapper[4730]: I0320 16:13:43.407694 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:43 crc kubenswrapper[4730]: I0320 16:13:43.421033 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-cf7jm" Mar 20 16:13:44 crc kubenswrapper[4730]: I0320 16:13:44.533373 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.389426 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31"} Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.531101 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.531676 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kknb6" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="registry-server" containerID="cri-o://df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc" gracePeriod=2 Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.716934 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.718419 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.732142 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"] Mar 20 16:13:45 crc kubenswrapper[4730]: I0320 16:13:45.766369 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.007441 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.100099 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities\") pod \"b134de24-b8f7-4727-b32f-82c48b28787c\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.100305 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c64hh\" (UniqueName: \"kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh\") pod \"b134de24-b8f7-4727-b32f-82c48b28787c\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.100354 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content\") pod \"b134de24-b8f7-4727-b32f-82c48b28787c\" (UID: \"b134de24-b8f7-4727-b32f-82c48b28787c\") " Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.101346 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities" (OuterVolumeSpecName: "utilities") pod "b134de24-b8f7-4727-b32f-82c48b28787c" (UID: "b134de24-b8f7-4727-b32f-82c48b28787c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.106228 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh" (OuterVolumeSpecName: "kube-api-access-c64hh") pod "b134de24-b8f7-4727-b32f-82c48b28787c" (UID: "b134de24-b8f7-4727-b32f-82c48b28787c"). InnerVolumeSpecName "kube-api-access-c64hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.148439 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b134de24-b8f7-4727-b32f-82c48b28787c" (UID: "b134de24-b8f7-4727-b32f-82c48b28787c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.202324 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.202351 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c64hh\" (UniqueName: \"kubernetes.io/projected/b134de24-b8f7-4727-b32f-82c48b28787c-kube-api-access-c64hh\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.202362 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b134de24-b8f7-4727-b32f-82c48b28787c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.400116 4730 generic.go:334] "Generic (PLEG): container finished" podID="b134de24-b8f7-4727-b32f-82c48b28787c" containerID="df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc" exitCode=0 Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.400174 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kknb6" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.400191 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerDied","Data":"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc"} Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.400903 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kknb6" event={"ID":"b134de24-b8f7-4727-b32f-82c48b28787c","Type":"ContainerDied","Data":"a676f4331dd426611b34d9f9540e81297862d80665c28c5215dc7f1301e36ad2"} Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.400985 4730 scope.go:117] "RemoveContainer" containerID="df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.401654 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cf7jm" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="registry-server" containerID="cri-o://8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad" gracePeriod=2 Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.426508 4730 scope.go:117] "RemoveContainer" containerID="6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f" Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.434558 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.443603 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kknb6"] Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.462001 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-smz8t" Mar 20 
16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.509483 4730 scope.go:117] "RemoveContainer" containerID="9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.551868 4730 scope.go:117] "RemoveContainer" containerID="df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc"
Mar 20 16:13:46 crc kubenswrapper[4730]: E0320 16:13:46.552516 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc\": container with ID starting with df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc not found: ID does not exist" containerID="df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.552549 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc"} err="failed to get container status \"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc\": rpc error: code = NotFound desc = could not find container \"df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc\": container with ID starting with df6f84b7aedb1f807fdbfc4478ea27c0ba29ede6e07cc494eb8ff2f4f0f2becc not found: ID does not exist"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.552569 4730 scope.go:117] "RemoveContainer" containerID="6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f"
Mar 20 16:13:46 crc kubenswrapper[4730]: E0320 16:13:46.552943 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f\": container with ID starting with 6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f not found: ID does not exist"
containerID="6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.552976 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f"} err="failed to get container status \"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f\": rpc error: code = NotFound desc = could not find container \"6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f\": container with ID starting with 6c36cedbc8afbe16b472f5fbd77a0dd22f8e529944cc3bfadfb6feb49119485f not found: ID does not exist"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.553001 4730 scope.go:117] "RemoveContainer" containerID="9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8"
Mar 20 16:13:46 crc kubenswrapper[4730]: E0320 16:13:46.553261 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8\": container with ID starting with 9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8 not found: ID does not exist" containerID="9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.553287 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8"} err="failed to get container status \"9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8\": rpc error: code = NotFound desc = could not find container \"9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8\": container with ID starting with 9ea9db45b55ad8d6732f20392de0be9c4973c0210527a27e6f448c63676275c8 not found: ID does not exist"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.823526 4730 util.go:48] "No ready
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf7jm"
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.915938 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content\") pod \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") "
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.916109 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25\") pod \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") "
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.916156 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities\") pod \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\" (UID: \"04ed0a2f-539e-4f9e-b106-6012931ca0f2\") "
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.917084 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities" (OuterVolumeSpecName: "utilities") pod "04ed0a2f-539e-4f9e-b106-6012931ca0f2" (UID: "04ed0a2f-539e-4f9e-b106-6012931ca0f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.923484 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25" (OuterVolumeSpecName: "kube-api-access-r6t25") pod "04ed0a2f-539e-4f9e-b106-6012931ca0f2" (UID: "04ed0a2f-539e-4f9e-b106-6012931ca0f2").
InnerVolumeSpecName "kube-api-access-r6t25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:13:46 crc kubenswrapper[4730]: I0320 16:13:46.950841 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04ed0a2f-539e-4f9e-b106-6012931ca0f2" (UID: "04ed0a2f-539e-4f9e-b106-6012931ca0f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.018887 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.018925 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6t25\" (UniqueName: \"kubernetes.io/projected/04ed0a2f-539e-4f9e-b106-6012931ca0f2-kube-api-access-r6t25\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.018936 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ed0a2f-539e-4f9e-b106-6012931ca0f2-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.409659 4730 generic.go:334] "Generic (PLEG): container finished" podID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerID="8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad" exitCode=0
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.409729 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerDied","Data":"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"}
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.410076 4730
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf7jm" event={"ID":"04ed0a2f-539e-4f9e-b106-6012931ca0f2","Type":"ContainerDied","Data":"b4039747777cef975b706863c807a20fc0909a15358be96231a91bbe7aaf26bc"}
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.409763 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf7jm"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.410096 4730 scope.go:117] "RemoveContainer" containerID="8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.428550 4730 scope.go:117] "RemoveContainer" containerID="478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.444102 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"]
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.451373 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf7jm"]
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.466435 4730 scope.go:117] "RemoveContainer" containerID="329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.501269 4730 scope.go:117] "RemoveContainer" containerID="8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"
Mar 20 16:13:47 crc kubenswrapper[4730]: E0320 16:13:47.501848 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad\": container with ID starting with 8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad not found: ID does not exist" containerID="8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"
Mar 20 16:13:47 crc
kubenswrapper[4730]: I0320 16:13:47.501893 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad"} err="failed to get container status \"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad\": rpc error: code = NotFound desc = could not find container \"8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad\": container with ID starting with 8be767447f54495978c4a0af427a61695bb8c8d39e01bf3e3cf8a755068b39ad not found: ID does not exist"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.501921 4730 scope.go:117] "RemoveContainer" containerID="478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"
Mar 20 16:13:47 crc kubenswrapper[4730]: E0320 16:13:47.502181 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294\": container with ID starting with 478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294 not found: ID does not exist" containerID="478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.502206 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294"} err="failed to get container status \"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294\": rpc error: code = NotFound desc = could not find container \"478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294\": container with ID starting with 478d1155c51193816a900b0e29ac07329d923136e4d365fde22bb334a413a294 not found: ID does not exist"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.502220 4730 scope.go:117] "RemoveContainer" containerID="329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae"
Mar 20
16:13:47 crc kubenswrapper[4730]: E0320 16:13:47.502464 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae\": container with ID starting with 329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae not found: ID does not exist" containerID="329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.502498 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae"} err="failed to get container status \"329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae\": rpc error: code = NotFound desc = could not find container \"329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae\": container with ID starting with 329624f3588bc30d04808da83dac1197f8b8d3102b182217d737f7794a07afae not found: ID does not exist"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.545303 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" path="/var/lib/kubelet/pods/04ed0a2f-539e-4f9e-b106-6012931ca0f2/volumes"
Mar 20 16:13:47 crc kubenswrapper[4730]: I0320 16:13:47.546213 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" path="/var/lib/kubelet/pods/b134de24-b8f7-4727-b32f-82c48b28787c/volumes"
Mar 20 16:13:48 crc kubenswrapper[4730]: I0320 16:13:48.937538 4730 scope.go:117] "RemoveContainer" containerID="c002961b60e7f958b6eac722566b65c8c9c5ccb02bb7acae14d9879bae50e4f2"
Mar 20 16:13:49 crc kubenswrapper[4730]: I0320 16:13:49.929371 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-smz8t"]
Mar 20 16:13:49 crc kubenswrapper[4730]: I0320 16:13:49.929651 4730
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-smz8t" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="registry-server" containerID="cri-o://4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde" gracePeriod=2
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.350580 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smz8t"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.438780 4730 generic.go:334] "Generic (PLEG): container finished" podID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerID="4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde" exitCode=0
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.438818 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerDied","Data":"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"}
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.438841 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smz8t" event={"ID":"8a30ccad-6262-4840-bab0-7c70cce5c54e","Type":"ContainerDied","Data":"000e4fc2d8a82235e0aca159d20f3965efb17246205b4d4c27e700483ac7437b"}
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.438857 4730 scope.go:117] "RemoveContainer" containerID="4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.438952 4730 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-smz8t"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.456525 4730 scope.go:117] "RemoveContainer" containerID="299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.474430 4730 scope.go:117] "RemoveContainer" containerID="695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.479773 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcg8l\" (UniqueName: \"kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l\") pod \"8a30ccad-6262-4840-bab0-7c70cce5c54e\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") "
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.479829 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content\") pod \"8a30ccad-6262-4840-bab0-7c70cce5c54e\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") "
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.479896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities\") pod \"8a30ccad-6262-4840-bab0-7c70cce5c54e\" (UID: \"8a30ccad-6262-4840-bab0-7c70cce5c54e\") "
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.481649 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities" (OuterVolumeSpecName: "utilities") pod "8a30ccad-6262-4840-bab0-7c70cce5c54e" (UID: "8a30ccad-6262-4840-bab0-7c70cce5c54e"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.485984 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l" (OuterVolumeSpecName: "kube-api-access-tcg8l") pod "8a30ccad-6262-4840-bab0-7c70cce5c54e" (UID: "8a30ccad-6262-4840-bab0-7c70cce5c54e"). InnerVolumeSpecName "kube-api-access-tcg8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.530568 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a30ccad-6262-4840-bab0-7c70cce5c54e" (UID: "8a30ccad-6262-4840-bab0-7c70cce5c54e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.568287 4730 scope.go:117] "RemoveContainer" containerID="4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"
Mar 20 16:13:50 crc kubenswrapper[4730]: E0320 16:13:50.568772 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde\": container with ID starting with 4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde not found: ID does not exist" containerID="4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.568824 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde"} err="failed to get container status \"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde\": rpc error: code = NotFound desc = could not find
container \"4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde\": container with ID starting with 4c8596c65726e22fe600b5967470f2bf5449f3246bf3d726509883465d6ebbde not found: ID does not exist"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.568852 4730 scope.go:117] "RemoveContainer" containerID="299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"
Mar 20 16:13:50 crc kubenswrapper[4730]: E0320 16:13:50.569590 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e\": container with ID starting with 299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e not found: ID does not exist" containerID="299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.569637 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e"} err="failed to get container status \"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e\": rpc error: code = NotFound desc = could not find container \"299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e\": container with ID starting with 299f8ee9143efa4f898b30f6780be69975c00a31d3fe75022db1b4bdfde1ce2e not found: ID does not exist"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.569666 4730 scope.go:117] "RemoveContainer" containerID="695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd"
Mar 20 16:13:50 crc kubenswrapper[4730]: E0320 16:13:50.569911 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd\": container with ID starting with 695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd not found: ID does
not exist" containerID="695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.569938 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd"} err="failed to get container status \"695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd\": rpc error: code = NotFound desc = could not find container \"695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd\": container with ID starting with 695ec40a8d4727ad133ecc0b1c4d07769a4ff2a01a7d067cb8b42c4ff5501acd not found: ID does not exist"
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.582889 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcg8l\" (UniqueName: \"kubernetes.io/projected/8a30ccad-6262-4840-bab0-7c70cce5c54e-kube-api-access-tcg8l\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.582922 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.582934 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30ccad-6262-4840-bab0-7c70cce5c54e-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.775901 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-smz8t"]
Mar 20 16:13:50 crc kubenswrapper[4730]: I0320 16:13:50.785076 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-smz8t"]
Mar 20 16:13:51 crc kubenswrapper[4730]: I0320 16:13:51.546044 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" path="/var/lib/kubelet/pods/8a30ccad-6262-4840-bab0-7c70cce5c54e/volumes"
Mar 20 16:13:56 crc kubenswrapper[4730]: I0320 16:13:56.518680 4730 generic.go:334] "Generic (PLEG): container finished" podID="a8c27e63-ebf9-45ff-87b2-4782b20e19e3" containerID="f5e9f1ce126d736eed319722f72b288bfa7b28dcea13acceb2dec38e7da569bc" exitCode=0
Mar 20 16:13:56 crc kubenswrapper[4730]: I0320 16:13:56.518824 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" event={"ID":"a8c27e63-ebf9-45ff-87b2-4782b20e19e3","Type":"ContainerDied","Data":"f5e9f1ce126d736eed319722f72b288bfa7b28dcea13acceb2dec38e7da569bc"}
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.051120 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.132129 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5q7k\" (UniqueName: \"kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k\") pod \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") "
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.132285 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory\") pod \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\" (UID: \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") "
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.132375 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam\") pod \"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\" (UID:
\"a8c27e63-ebf9-45ff-87b2-4782b20e19e3\") "
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.138569 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k" (OuterVolumeSpecName: "kube-api-access-k5q7k") pod "a8c27e63-ebf9-45ff-87b2-4782b20e19e3" (UID: "a8c27e63-ebf9-45ff-87b2-4782b20e19e3"). InnerVolumeSpecName "kube-api-access-k5q7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.169345 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8c27e63-ebf9-45ff-87b2-4782b20e19e3" (UID: "a8c27e63-ebf9-45ff-87b2-4782b20e19e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.183546 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory" (OuterVolumeSpecName: "inventory") pod "a8c27e63-ebf9-45ff-87b2-4782b20e19e3" (UID: "a8c27e63-ebf9-45ff-87b2-4782b20e19e3"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.235655 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5q7k\" (UniqueName: \"kubernetes.io/projected/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-kube-api-access-k5q7k\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.235709 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-inventory\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.235728 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8c27e63-ebf9-45ff-87b2-4782b20e19e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.539351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj" event={"ID":"a8c27e63-ebf9-45ff-87b2-4782b20e19e3","Type":"ContainerDied","Data":"01d39be61543c09265f967d5b7ada582d06aa7ac17db1c635eb6ef1c127d2c54"}
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.539663 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d39be61543c09265f967d5b7ada582d06aa7ac17db1c635eb6ef1c127d2c54"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.539465 4730 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m89bj"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.639890 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4mmgp"]
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640372 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640391 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640412 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640421 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640437 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640446 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640462 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640470 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="extract-utilities"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640494 4730 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="a8c27e63-ebf9-45ff-87b2-4782b20e19e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640510 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c27e63-ebf9-45ff-87b2-4782b20e19e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640530 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640539 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640560 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640568 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="extract-content"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640582 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="registry-server"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640590 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="registry-server"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640606 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="registry-server"
Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640614 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="registry-server"
Mar 20 16:13:58 crc kubenswrapper[4730]: E0320 16:13:58.640631 4730
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="registry-server" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640640 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="registry-server" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640913 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ed0a2f-539e-4f9e-b106-6012931ca0f2" containerName="registry-server" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640940 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a30ccad-6262-4840-bab0-7c70cce5c54e" containerName="registry-server" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640953 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c27e63-ebf9-45ff-87b2-4782b20e19e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.640967 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b134de24-b8f7-4727-b32f-82c48b28787c" containerName="registry-server" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.641812 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.643781 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.644640 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.651565 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.651724 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.661191 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4mmgp"] Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.745307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2zr\" (UniqueName: \"kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.745492 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.745609 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.847362 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.847422 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.847647 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2zr\" (UniqueName: \"kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.853088 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc 
kubenswrapper[4730]: I0320 16:13:58.853136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.865609 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2zr\" (UniqueName: \"kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr\") pod \"ssh-known-hosts-edpm-deployment-4mmgp\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:58 crc kubenswrapper[4730]: I0320 16:13:58.966650 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:13:59 crc kubenswrapper[4730]: I0320 16:13:59.487431 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4mmgp"] Mar 20 16:13:59 crc kubenswrapper[4730]: I0320 16:13:59.549213 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" event={"ID":"eebb2eb5-4553-41b0-85e6-81e470576d50","Type":"ContainerStarted","Data":"d15eba2a3cb4372e1d50223b264561216531c96c5a30cbb5b72ec2027f141f11"} Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.134781 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567054-glf2f"] Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.136547 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.138449 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.141982 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.144293 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.150787 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567054-glf2f"] Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.200025 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq6kq\" (UniqueName: \"kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq\") pod \"auto-csr-approver-29567054-glf2f\" (UID: \"d62c2430-2f2b-49f0-848a-015a72d04090\") " pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.302286 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq6kq\" (UniqueName: \"kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq\") pod \"auto-csr-approver-29567054-glf2f\" (UID: \"d62c2430-2f2b-49f0-848a-015a72d04090\") " pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.323673 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq6kq\" (UniqueName: \"kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq\") pod \"auto-csr-approver-29567054-glf2f\" (UID: \"d62c2430-2f2b-49f0-848a-015a72d04090\") " 
pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.513940 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:00 crc kubenswrapper[4730]: I0320 16:14:00.941738 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567054-glf2f"] Mar 20 16:14:00 crc kubenswrapper[4730]: W0320 16:14:00.958166 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd62c2430_2f2b_49f0_848a_015a72d04090.slice/crio-d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23 WatchSource:0}: Error finding container d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23: Status 404 returned error can't find the container with id d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23 Mar 20 16:14:01 crc kubenswrapper[4730]: I0320 16:14:01.570125 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" event={"ID":"eebb2eb5-4553-41b0-85e6-81e470576d50","Type":"ContainerStarted","Data":"cf9fbc8e58f88b901eea9f003d08be59e29a519886cb230e8c1e76d47a42f08e"} Mar 20 16:14:01 crc kubenswrapper[4730]: I0320 16:14:01.571288 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567054-glf2f" event={"ID":"d62c2430-2f2b-49f0-848a-015a72d04090","Type":"ContainerStarted","Data":"d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23"} Mar 20 16:14:01 crc kubenswrapper[4730]: I0320 16:14:01.600406 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" podStartSLOduration=2.38354246 podStartE2EDuration="3.60038819s" podCreationTimestamp="2026-03-20 16:13:58 +0000 UTC" firstStartedPulling="2026-03-20 16:13:59.483803064 +0000 UTC 
m=+2098.697174433" lastFinishedPulling="2026-03-20 16:14:00.700648794 +0000 UTC m=+2099.914020163" observedRunningTime="2026-03-20 16:14:01.592609564 +0000 UTC m=+2100.805980943" watchObservedRunningTime="2026-03-20 16:14:01.60038819 +0000 UTC m=+2100.813759559" Mar 20 16:14:03 crc kubenswrapper[4730]: I0320 16:14:03.600336 4730 generic.go:334] "Generic (PLEG): container finished" podID="d62c2430-2f2b-49f0-848a-015a72d04090" containerID="88155d5b3d3f84b9a79ccb85b9d478d5415c51d7944f4bb78a548434ba4fb653" exitCode=0 Mar 20 16:14:03 crc kubenswrapper[4730]: I0320 16:14:03.600440 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567054-glf2f" event={"ID":"d62c2430-2f2b-49f0-848a-015a72d04090","Type":"ContainerDied","Data":"88155d5b3d3f84b9a79ccb85b9d478d5415c51d7944f4bb78a548434ba4fb653"} Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.007569 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.098812 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq6kq\" (UniqueName: \"kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq\") pod \"d62c2430-2f2b-49f0-848a-015a72d04090\" (UID: \"d62c2430-2f2b-49f0-848a-015a72d04090\") " Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.104069 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq" (OuterVolumeSpecName: "kube-api-access-zq6kq") pod "d62c2430-2f2b-49f0-848a-015a72d04090" (UID: "d62c2430-2f2b-49f0-848a-015a72d04090"). InnerVolumeSpecName "kube-api-access-zq6kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.202024 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq6kq\" (UniqueName: \"kubernetes.io/projected/d62c2430-2f2b-49f0-848a-015a72d04090-kube-api-access-zq6kq\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.624055 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567054-glf2f" event={"ID":"d62c2430-2f2b-49f0-848a-015a72d04090","Type":"ContainerDied","Data":"d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23"} Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.624109 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a691de51bc653167d6747ca95afa97af85f0a35af0278e824af53532069c23" Mar 20 16:14:05 crc kubenswrapper[4730]: I0320 16:14:05.624109 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567054-glf2f" Mar 20 16:14:06 crc kubenswrapper[4730]: I0320 16:14:06.083759 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567048-nhd7d"] Mar 20 16:14:06 crc kubenswrapper[4730]: I0320 16:14:06.092090 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567048-nhd7d"] Mar 20 16:14:07 crc kubenswrapper[4730]: I0320 16:14:07.546508 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bea13e-dd2a-4ecf-9182-cc639a47c75f" path="/var/lib/kubelet/pods/28bea13e-dd2a-4ecf-9182-cc639a47c75f/volumes" Mar 20 16:14:07 crc kubenswrapper[4730]: I0320 16:14:07.650034 4730 generic.go:334] "Generic (PLEG): container finished" podID="eebb2eb5-4553-41b0-85e6-81e470576d50" containerID="cf9fbc8e58f88b901eea9f003d08be59e29a519886cb230e8c1e76d47a42f08e" exitCode=0 Mar 20 16:14:07 crc kubenswrapper[4730]: I0320 16:14:07.650094 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" event={"ID":"eebb2eb5-4553-41b0-85e6-81e470576d50","Type":"ContainerDied","Data":"cf9fbc8e58f88b901eea9f003d08be59e29a519886cb230e8c1e76d47a42f08e"} Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.050669 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.184423 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2zr\" (UniqueName: \"kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr\") pod \"eebb2eb5-4553-41b0-85e6-81e470576d50\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.184962 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0\") pod \"eebb2eb5-4553-41b0-85e6-81e470576d50\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.185040 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam\") pod \"eebb2eb5-4553-41b0-85e6-81e470576d50\" (UID: \"eebb2eb5-4553-41b0-85e6-81e470576d50\") " Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.191906 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr" (OuterVolumeSpecName: "kube-api-access-zg2zr") pod "eebb2eb5-4553-41b0-85e6-81e470576d50" (UID: "eebb2eb5-4553-41b0-85e6-81e470576d50"). InnerVolumeSpecName "kube-api-access-zg2zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.211760 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "eebb2eb5-4553-41b0-85e6-81e470576d50" (UID: "eebb2eb5-4553-41b0-85e6-81e470576d50"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.213018 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eebb2eb5-4553-41b0-85e6-81e470576d50" (UID: "eebb2eb5-4553-41b0-85e6-81e470576d50"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.286652 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.286810 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2zr\" (UniqueName: \"kubernetes.io/projected/eebb2eb5-4553-41b0-85e6-81e470576d50-kube-api-access-zg2zr\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.286888 4730 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eebb2eb5-4553-41b0-85e6-81e470576d50-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.667611 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" 
event={"ID":"eebb2eb5-4553-41b0-85e6-81e470576d50","Type":"ContainerDied","Data":"d15eba2a3cb4372e1d50223b264561216531c96c5a30cbb5b72ec2027f141f11"} Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.667651 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15eba2a3cb4372e1d50223b264561216531c96c5a30cbb5b72ec2027f141f11" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.667731 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4mmgp" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.756179 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9"] Mar 20 16:14:09 crc kubenswrapper[4730]: E0320 16:14:09.756861 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebb2eb5-4553-41b0-85e6-81e470576d50" containerName="ssh-known-hosts-edpm-deployment" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.756897 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebb2eb5-4553-41b0-85e6-81e470576d50" containerName="ssh-known-hosts-edpm-deployment" Mar 20 16:14:09 crc kubenswrapper[4730]: E0320 16:14:09.756913 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62c2430-2f2b-49f0-848a-015a72d04090" containerName="oc" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.756922 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62c2430-2f2b-49f0-848a-015a72d04090" containerName="oc" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.757157 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62c2430-2f2b-49f0-848a-015a72d04090" containerName="oc" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.757187 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebb2eb5-4553-41b0-85e6-81e470576d50" containerName="ssh-known-hosts-edpm-deployment" Mar 20 16:14:09 crc kubenswrapper[4730]: 
I0320 16:14:09.758141 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.760362 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.760641 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.761141 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.761196 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.768227 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9"] Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.805115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xpg\" (UniqueName: \"kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.805274 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc 
kubenswrapper[4730]: I0320 16:14:09.805322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.907224 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.907322 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.907411 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xpg\" (UniqueName: \"kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.912229 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory\") 
pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.914355 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:09 crc kubenswrapper[4730]: I0320 16:14:09.936716 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xpg\" (UniqueName: \"kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ckzj9\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:10 crc kubenswrapper[4730]: I0320 16:14:10.086181 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:10 crc kubenswrapper[4730]: I0320 16:14:10.626660 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9"] Mar 20 16:14:10 crc kubenswrapper[4730]: I0320 16:14:10.676726 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" event={"ID":"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c","Type":"ContainerStarted","Data":"fb1b51d3ab50918406f030cebcd373b0e4840f9c2af0258d10e23a73e1e02bde"} Mar 20 16:14:11 crc kubenswrapper[4730]: I0320 16:14:11.693696 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" event={"ID":"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c","Type":"ContainerStarted","Data":"d3df01e18681838ce7ec48f5f7fe992384e53dd9a8111ca0991db497dc1cb894"} Mar 20 16:14:11 crc kubenswrapper[4730]: I0320 16:14:11.720331 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" podStartSLOduration=2.302805272 podStartE2EDuration="2.720315076s" podCreationTimestamp="2026-03-20 16:14:09 +0000 UTC" firstStartedPulling="2026-03-20 16:14:10.635579258 +0000 UTC m=+2109.848950627" lastFinishedPulling="2026-03-20 16:14:11.053089042 +0000 UTC m=+2110.266460431" observedRunningTime="2026-03-20 16:14:11.715187594 +0000 UTC m=+2110.928558983" watchObservedRunningTime="2026-03-20 16:14:11.720315076 +0000 UTC m=+2110.933686445" Mar 20 16:14:18 crc kubenswrapper[4730]: I0320 16:14:18.767828 4730 generic.go:334] "Generic (PLEG): container finished" podID="0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" containerID="d3df01e18681838ce7ec48f5f7fe992384e53dd9a8111ca0991db497dc1cb894" exitCode=0 Mar 20 16:14:18 crc kubenswrapper[4730]: I0320 16:14:18.767997 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" event={"ID":"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c","Type":"ContainerDied","Data":"d3df01e18681838ce7ec48f5f7fe992384e53dd9a8111ca0991db497dc1cb894"} Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.171923 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.241005 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory\") pod \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.241114 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xpg\" (UniqueName: \"kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg\") pod \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.241217 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam\") pod \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\" (UID: \"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c\") " Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.247493 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg" (OuterVolumeSpecName: "kube-api-access-64xpg") pod "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" (UID: "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c"). InnerVolumeSpecName "kube-api-access-64xpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.268808 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" (UID: "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.270922 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory" (OuterVolumeSpecName: "inventory") pod "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" (UID: "0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.343619 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.343671 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.343700 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xpg\" (UniqueName: \"kubernetes.io/projected/0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c-kube-api-access-64xpg\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.787390 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" 
event={"ID":"0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c","Type":"ContainerDied","Data":"fb1b51d3ab50918406f030cebcd373b0e4840f9c2af0258d10e23a73e1e02bde"} Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.787441 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1b51d3ab50918406f030cebcd373b0e4840f9c2af0258d10e23a73e1e02bde" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.787459 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ckzj9" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.887867 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s"] Mar 20 16:14:20 crc kubenswrapper[4730]: E0320 16:14:20.889597 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.889634 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.890377 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.891487 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.905590 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s"] Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.930782 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.930839 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.931443 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:14:20 crc kubenswrapper[4730]: I0320 16:14:20.931698 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.057773 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.058774 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9fl\" (UniqueName: \"kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 
16:14:21.058925 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.160954 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.161126 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9fl\" (UniqueName: \"kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.161178 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.166311 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.168507 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.188095 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9fl\" (UniqueName: \"kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.250714 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.758429 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s"] Mar 20 16:14:21 crc kubenswrapper[4730]: I0320 16:14:21.796879 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" event={"ID":"b49a7544-a685-49c3-81fa-e1bbec4453ba","Type":"ContainerStarted","Data":"63fc1fd4c0c125b6658b4511d055c2663314adc2c857f6656144f940ae01a113"} Mar 20 16:14:22 crc kubenswrapper[4730]: I0320 16:14:22.807067 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" event={"ID":"b49a7544-a685-49c3-81fa-e1bbec4453ba","Type":"ContainerStarted","Data":"b263b8f0c1f575dac6afecb99dcc89954593438c3ff4e5d432e149952ec178a8"} Mar 20 16:14:22 crc kubenswrapper[4730]: I0320 16:14:22.824277 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" podStartSLOduration=2.373506203 podStartE2EDuration="2.82423125s" podCreationTimestamp="2026-03-20 16:14:20 +0000 UTC" firstStartedPulling="2026-03-20 16:14:21.765067352 +0000 UTC m=+2120.978438711" lastFinishedPulling="2026-03-20 16:14:22.215792389 +0000 UTC m=+2121.429163758" observedRunningTime="2026-03-20 16:14:22.821346819 +0000 UTC m=+2122.034718188" watchObservedRunningTime="2026-03-20 16:14:22.82423125 +0000 UTC m=+2122.037602619" Mar 20 16:14:31 crc kubenswrapper[4730]: I0320 16:14:31.917813 4730 generic.go:334] "Generic (PLEG): container finished" podID="b49a7544-a685-49c3-81fa-e1bbec4453ba" containerID="b263b8f0c1f575dac6afecb99dcc89954593438c3ff4e5d432e149952ec178a8" exitCode=0 Mar 20 16:14:31 crc kubenswrapper[4730]: I0320 16:14:31.917882 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" event={"ID":"b49a7544-a685-49c3-81fa-e1bbec4453ba","Type":"ContainerDied","Data":"b263b8f0c1f575dac6afecb99dcc89954593438c3ff4e5d432e149952ec178a8"} Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.350913 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.374933 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") pod \"b49a7544-a685-49c3-81fa-e1bbec4453ba\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.375050 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs9fl\" (UniqueName: \"kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl\") pod \"b49a7544-a685-49c3-81fa-e1bbec4453ba\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.375087 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory\") pod \"b49a7544-a685-49c3-81fa-e1bbec4453ba\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.420897 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl" (OuterVolumeSpecName: "kube-api-access-hs9fl") pod "b49a7544-a685-49c3-81fa-e1bbec4453ba" (UID: "b49a7544-a685-49c3-81fa-e1bbec4453ba"). InnerVolumeSpecName "kube-api-access-hs9fl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:14:33 crc kubenswrapper[4730]: E0320 16:14:33.420983 4730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam podName:b49a7544-a685-49c3-81fa-e1bbec4453ba nodeName:}" failed. No retries permitted until 2026-03-20 16:14:33.920950456 +0000 UTC m=+2133.134321835 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam") pod "b49a7544-a685-49c3-81fa-e1bbec4453ba" (UID: "b49a7544-a685-49c3-81fa-e1bbec4453ba") : error deleting /var/lib/kubelet/pods/b49a7544-a685-49c3-81fa-e1bbec4453ba/volume-subpaths: remove /var/lib/kubelet/pods/b49a7544-a685-49c3-81fa-e1bbec4453ba/volume-subpaths: no such file or directory Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.423738 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory" (OuterVolumeSpecName: "inventory") pod "b49a7544-a685-49c3-81fa-e1bbec4453ba" (UID: "b49a7544-a685-49c3-81fa-e1bbec4453ba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.477929 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs9fl\" (UniqueName: \"kubernetes.io/projected/b49a7544-a685-49c3-81fa-e1bbec4453ba-kube-api-access-hs9fl\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.477970 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.939964 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" event={"ID":"b49a7544-a685-49c3-81fa-e1bbec4453ba","Type":"ContainerDied","Data":"63fc1fd4c0c125b6658b4511d055c2663314adc2c857f6656144f940ae01a113"} Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.940482 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63fc1fd4c0c125b6658b4511d055c2663314adc2c857f6656144f940ae01a113" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.940016 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s" Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.988922 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") pod \"b49a7544-a685-49c3-81fa-e1bbec4453ba\" (UID: \"b49a7544-a685-49c3-81fa-e1bbec4453ba\") " Mar 20 16:14:33 crc kubenswrapper[4730]: I0320 16:14:33.994821 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b49a7544-a685-49c3-81fa-e1bbec4453ba" (UID: "b49a7544-a685-49c3-81fa-e1bbec4453ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.054820 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n"] Mar 20 16:14:34 crc kubenswrapper[4730]: E0320 16:14:34.056025 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49a7544-a685-49c3-81fa-e1bbec4453ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.056048 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49a7544-a685-49c3-81fa-e1bbec4453ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.056577 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49a7544-a685-49c3-81fa-e1bbec4453ba" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.057899 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.067551 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.067867 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.068987 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.069163 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.087585 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n"] Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099540 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099592 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099663 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099698 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099717 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099738 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099757 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099782 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099810 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099870 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099910 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhrdz\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.099928 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.100000 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b49a7544-a685-49c3-81fa-e1bbec4453ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.201808 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.201889 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhrdz\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.201911 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.201968 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.201995 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202019 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202053 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202111 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202169 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202188 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.202285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.205903 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.206623 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: 
\"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.207231 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.207474 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.207840 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.208256 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" 
Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.208330 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.208821 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.209175 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.209823 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.211545 4730 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.212581 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.213545 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.226564 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhrdz\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4d78n\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.411608 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" Mar 20 16:14:34 crc kubenswrapper[4730]: I0320 16:14:34.987338 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n"] Mar 20 16:14:34 crc kubenswrapper[4730]: W0320 16:14:34.988772 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423144fa_9b01_4466_993c_6ab7075e1ad5.slice/crio-21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0 WatchSource:0}: Error finding container 21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0: Status 404 returned error can't find the container with id 21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0 Mar 20 16:14:35 crc kubenswrapper[4730]: I0320 16:14:35.961280 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" event={"ID":"423144fa-9b01-4466-993c-6ab7075e1ad5","Type":"ContainerStarted","Data":"01fd28163e9e73afde319534feb3ad11d63319d2239c28468dad672e5ab2ebe3"} Mar 20 16:14:35 crc kubenswrapper[4730]: I0320 16:14:35.961887 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" event={"ID":"423144fa-9b01-4466-993c-6ab7075e1ad5","Type":"ContainerStarted","Data":"21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0"} Mar 20 16:14:35 crc kubenswrapper[4730]: I0320 16:14:35.996427 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" podStartSLOduration=1.499690779 podStartE2EDuration="1.996404795s" podCreationTimestamp="2026-03-20 16:14:34 +0000 UTC" firstStartedPulling="2026-03-20 16:14:34.991736842 +0000 UTC m=+2134.205108211" lastFinishedPulling="2026-03-20 16:14:35.488450838 +0000 UTC 
m=+2134.701822227" observedRunningTime="2026-03-20 16:14:35.992462236 +0000 UTC m=+2135.205833615" watchObservedRunningTime="2026-03-20 16:14:35.996404795 +0000 UTC m=+2135.209776174" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.055795 4730 scope.go:117] "RemoveContainer" containerID="3781eddb5c4e3f16097f248108ceebb43195728c87b3ad6512e75bc75dffb2bb" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.422684 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"] Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.428429 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.434944 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"] Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.538813 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.539141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jfs\" (UniqueName: \"kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.539504 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content\") pod 
\"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.641377 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.641501 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.641543 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97jfs\" (UniqueName: \"kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.642005 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.642050 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities\") pod \"redhat-operators-nlxpq\" (UID: 
\"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.663738 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97jfs\" (UniqueName: \"kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs\") pod \"redhat-operators-nlxpq\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:49 crc kubenswrapper[4730]: I0320 16:14:49.762634 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:50 crc kubenswrapper[4730]: I0320 16:14:50.252468 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"] Mar 20 16:14:50 crc kubenswrapper[4730]: W0320 16:14:50.257737 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7f46bf2_4fe4_4576_a9f1_eeec5b2bf8c3.slice/crio-2fba6e13ca4ae6246c5ca75c7721bb78ad010f572aced34848817074b527537d WatchSource:0}: Error finding container 2fba6e13ca4ae6246c5ca75c7721bb78ad010f572aced34848817074b527537d: Status 404 returned error can't find the container with id 2fba6e13ca4ae6246c5ca75c7721bb78ad010f572aced34848817074b527537d Mar 20 16:14:51 crc kubenswrapper[4730]: I0320 16:14:51.106295 4730 generic.go:334] "Generic (PLEG): container finished" podID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerID="9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb" exitCode=0 Mar 20 16:14:51 crc kubenswrapper[4730]: I0320 16:14:51.106398 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerDied","Data":"9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb"} Mar 20 16:14:51 crc 
kubenswrapper[4730]: I0320 16:14:51.106632 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerStarted","Data":"2fba6e13ca4ae6246c5ca75c7721bb78ad010f572aced34848817074b527537d"} Mar 20 16:14:52 crc kubenswrapper[4730]: I0320 16:14:52.115375 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerStarted","Data":"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"} Mar 20 16:14:54 crc kubenswrapper[4730]: I0320 16:14:54.137405 4730 generic.go:334] "Generic (PLEG): container finished" podID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerID="66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e" exitCode=0 Mar 20 16:14:54 crc kubenswrapper[4730]: I0320 16:14:54.137516 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerDied","Data":"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"} Mar 20 16:14:55 crc kubenswrapper[4730]: I0320 16:14:55.153654 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerStarted","Data":"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713"} Mar 20 16:14:55 crc kubenswrapper[4730]: I0320 16:14:55.185290 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlxpq" podStartSLOduration=2.8084730540000002 podStartE2EDuration="6.185226137s" podCreationTimestamp="2026-03-20 16:14:49 +0000 UTC" firstStartedPulling="2026-03-20 16:14:51.167308318 +0000 UTC m=+2150.380679687" lastFinishedPulling="2026-03-20 16:14:54.544061401 +0000 UTC m=+2153.757432770" 
observedRunningTime="2026-03-20 16:14:55.176339116 +0000 UTC m=+2154.389710545" watchObservedRunningTime="2026-03-20 16:14:55.185226137 +0000 UTC m=+2154.398597546" Mar 20 16:14:59 crc kubenswrapper[4730]: I0320 16:14:59.763355 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:14:59 crc kubenswrapper[4730]: I0320 16:14:59.764045 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.145149 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z"] Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.146688 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.150298 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.150544 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.157697 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z"] Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.302613 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc 
kubenswrapper[4730]: I0320 16:15:00.302768 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsl7k\" (UniqueName: \"kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.302878 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.404575 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.404970 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.405095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsl7k\" (UniqueName: \"kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k\") pod 
\"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.405876 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.416398 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.430237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsl7k\" (UniqueName: \"kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k\") pod \"collect-profiles-29567055-r9s4z\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.465609 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.828102 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlxpq" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="registry-server" probeResult="failure" output=< Mar 20 16:15:00 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:15:00 crc kubenswrapper[4730]: > Mar 20 16:15:00 crc kubenswrapper[4730]: I0320 16:15:00.986020 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z"] Mar 20 16:15:01 crc kubenswrapper[4730]: I0320 16:15:01.216534 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" event={"ID":"5af1b002-c577-4334-8304-5f44a67a5119","Type":"ContainerStarted","Data":"e6a0ef74485773b0b9248ebe8daaaea6116dc164de300c92368c1b1d44b5c372"} Mar 20 16:15:01 crc kubenswrapper[4730]: I0320 16:15:01.217135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" event={"ID":"5af1b002-c577-4334-8304-5f44a67a5119","Type":"ContainerStarted","Data":"b2d223de9580f697cadc3d5b517a4d5741fc74d1cc4709134ea792dc8f34fde0"} Mar 20 16:15:01 crc kubenswrapper[4730]: I0320 16:15:01.240534 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" podStartSLOduration=1.240511597 podStartE2EDuration="1.240511597s" podCreationTimestamp="2026-03-20 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:15:01.235201608 +0000 UTC m=+2160.448572997" watchObservedRunningTime="2026-03-20 16:15:01.240511597 +0000 UTC m=+2160.453882966" 
Mar 20 16:15:02 crc kubenswrapper[4730]: I0320 16:15:02.250343 4730 generic.go:334] "Generic (PLEG): container finished" podID="5af1b002-c577-4334-8304-5f44a67a5119" containerID="e6a0ef74485773b0b9248ebe8daaaea6116dc164de300c92368c1b1d44b5c372" exitCode=0 Mar 20 16:15:02 crc kubenswrapper[4730]: I0320 16:15:02.250715 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" event={"ID":"5af1b002-c577-4334-8304-5f44a67a5119","Type":"ContainerDied","Data":"e6a0ef74485773b0b9248ebe8daaaea6116dc164de300c92368c1b1d44b5c372"} Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.711240 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.778215 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume\") pod \"5af1b002-c577-4334-8304-5f44a67a5119\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.778417 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsl7k\" (UniqueName: \"kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k\") pod \"5af1b002-c577-4334-8304-5f44a67a5119\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.778473 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume\") pod \"5af1b002-c577-4334-8304-5f44a67a5119\" (UID: \"5af1b002-c577-4334-8304-5f44a67a5119\") " Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.778992 4730 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume" (OuterVolumeSpecName: "config-volume") pod "5af1b002-c577-4334-8304-5f44a67a5119" (UID: "5af1b002-c577-4334-8304-5f44a67a5119"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.785106 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5af1b002-c577-4334-8304-5f44a67a5119" (UID: "5af1b002-c577-4334-8304-5f44a67a5119"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.795489 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k" (OuterVolumeSpecName: "kube-api-access-gsl7k") pod "5af1b002-c577-4334-8304-5f44a67a5119" (UID: "5af1b002-c577-4334-8304-5f44a67a5119"). InnerVolumeSpecName "kube-api-access-gsl7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.881383 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5af1b002-c577-4334-8304-5f44a67a5119-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.881416 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsl7k\" (UniqueName: \"kubernetes.io/projected/5af1b002-c577-4334-8304-5f44a67a5119-kube-api-access-gsl7k\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:03 crc kubenswrapper[4730]: I0320 16:15:03.881425 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af1b002-c577-4334-8304-5f44a67a5119-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:04 crc kubenswrapper[4730]: I0320 16:15:04.273648 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" event={"ID":"5af1b002-c577-4334-8304-5f44a67a5119","Type":"ContainerDied","Data":"b2d223de9580f697cadc3d5b517a4d5741fc74d1cc4709134ea792dc8f34fde0"} Mar 20 16:15:04 crc kubenswrapper[4730]: I0320 16:15:04.273998 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d223de9580f697cadc3d5b517a4d5741fc74d1cc4709134ea792dc8f34fde0" Mar 20 16:15:04 crc kubenswrapper[4730]: I0320 16:15:04.273689 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z" Mar 20 16:15:04 crc kubenswrapper[4730]: I0320 16:15:04.319530 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc"] Mar 20 16:15:04 crc kubenswrapper[4730]: I0320 16:15:04.328138 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567010-d69bc"] Mar 20 16:15:05 crc kubenswrapper[4730]: I0320 16:15:05.548115 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be19fb65-a04f-42df-9b96-e620b58754bb" path="/var/lib/kubelet/pods/be19fb65-a04f-42df-9b96-e620b58754bb/volumes" Mar 20 16:15:09 crc kubenswrapper[4730]: I0320 16:15:09.813431 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:15:09 crc kubenswrapper[4730]: I0320 16:15:09.873362 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:15:10 crc kubenswrapper[4730]: I0320 16:15:10.055705 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"] Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.332918 4730 generic.go:334] "Generic (PLEG): container finished" podID="423144fa-9b01-4466-993c-6ab7075e1ad5" containerID="01fd28163e9e73afde319534feb3ad11d63319d2239c28468dad672e5ab2ebe3" exitCode=0 Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.333012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" event={"ID":"423144fa-9b01-4466-993c-6ab7075e1ad5","Type":"ContainerDied","Data":"01fd28163e9e73afde319534feb3ad11d63319d2239c28468dad672e5ab2ebe3"} Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.333414 4730 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-operators-nlxpq" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="registry-server" containerID="cri-o://b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713" gracePeriod=2 Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.786409 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxpq" Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.845792 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content\") pod \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.845916 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97jfs\" (UniqueName: \"kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs\") pod \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.845949 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities\") pod \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\" (UID: \"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3\") " Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.846915 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities" (OuterVolumeSpecName: "utilities") pod "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" (UID: "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.851490 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs" (OuterVolumeSpecName: "kube-api-access-97jfs") pod "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" (UID: "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3"). InnerVolumeSpecName "kube-api-access-97jfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.948133 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97jfs\" (UniqueName: \"kubernetes.io/projected/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-kube-api-access-97jfs\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.948404 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:11 crc kubenswrapper[4730]: I0320 16:15:11.981665 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" (UID: "f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.050112 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.345383 4730 generic.go:334] "Generic (PLEG): container finished" podID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerID="b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713" exitCode=0 Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.345443 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerDied","Data":"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713"} Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.345477 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxpq" event={"ID":"f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3","Type":"ContainerDied","Data":"2fba6e13ca4ae6246c5ca75c7721bb78ad010f572aced34848817074b527537d"} Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.345493 4730 scope.go:117] "RemoveContainer" containerID="b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713" Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.345501 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxpq"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.382743 4730 scope.go:117] "RemoveContainer" containerID="66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.395627 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"]
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.443461 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlxpq"]
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.491787 4730 scope.go:117] "RemoveContainer" containerID="9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.539180 4730 scope.go:117] "RemoveContainer" containerID="b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713"
Mar 20 16:15:12 crc kubenswrapper[4730]: E0320 16:15:12.539608 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713\": container with ID starting with b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713 not found: ID does not exist" containerID="b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.539647 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713"} err="failed to get container status \"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713\": rpc error: code = NotFound desc = could not find container \"b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713\": container with ID starting with b04b9a549015f8905f380aaf8421cc03f90628d6c219b0109a0e893083103713 not found: ID does not exist"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.539679 4730 scope.go:117] "RemoveContainer" containerID="66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"
Mar 20 16:15:12 crc kubenswrapper[4730]: E0320 16:15:12.543728 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e\": container with ID starting with 66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e not found: ID does not exist" containerID="66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.543779 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e"} err="failed to get container status \"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e\": rpc error: code = NotFound desc = could not find container \"66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e\": container with ID starting with 66dbf75889a1e65837bed6622eafa5ba26cfd66025601ae0dc945c0e8dce250e not found: ID does not exist"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.543812 4730 scope.go:117] "RemoveContainer" containerID="9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb"
Mar 20 16:15:12 crc kubenswrapper[4730]: E0320 16:15:12.547684 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb\": container with ID starting with 9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb not found: ID does not exist" containerID="9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.547738 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb"} err="failed to get container status \"9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb\": rpc error: code = NotFound desc = could not find container \"9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb\": container with ID starting with 9ca380081388b6f9c9877cc291353929dee2da6133f14e020a397e8856a314bb not found: ID does not exist"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.890152 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n"
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991392 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991509 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991607 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991698 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991764 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991819 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991873 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.991933 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992057 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992158 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992238 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992331 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992407 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhrdz\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.992494 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam\") pod \"423144fa-9b01-4466-993c-6ab7075e1ad5\" (UID: \"423144fa-9b01-4466-993c-6ab7075e1ad5\") "
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.997488 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.997565 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.997619 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:15:12 crc kubenswrapper[4730]: I0320 16:15:12.999702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.000190 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.000212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.000781 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.001333 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.001702 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.001816 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.002706 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz" (OuterVolumeSpecName: "kube-api-access-dhrdz") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "kube-api-access-dhrdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.006365 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.022939 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory" (OuterVolumeSpecName: "inventory") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.050232 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "423144fa-9b01-4466-993c-6ab7075e1ad5" (UID: "423144fa-9b01-4466-993c-6ab7075e1ad5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.094810 4730 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095057 4730 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095147 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095241 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhrdz\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-kube-api-access-dhrdz\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095347 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095435 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095509 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-inventory\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095580 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095639 4730 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095701 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095764 4730 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095822 4730 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095884 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/423144fa-9b01-4466-993c-6ab7075e1ad5-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.095940 4730 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/423144fa-9b01-4466-993c-6ab7075e1ad5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.355345 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n" event={"ID":"423144fa-9b01-4466-993c-6ab7075e1ad5","Type":"ContainerDied","Data":"21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0"}
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.355394 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c9b74b7d25aff2921555a89646dca1c3794f9a5ed3d4441b6c26e7fd3963a0"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.355357 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4d78n"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.454658 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"]
Mar 20 16:15:13 crc kubenswrapper[4730]: E0320 16:15:13.455168 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af1b002-c577-4334-8304-5f44a67a5119" containerName="collect-profiles"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.455195 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af1b002-c577-4334-8304-5f44a67a5119" containerName="collect-profiles"
Mar 20 16:15:13 crc kubenswrapper[4730]: E0320 16:15:13.455220 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="extract-utilities"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.455227 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="extract-utilities"
Mar 20 16:15:13 crc kubenswrapper[4730]: E0320 16:15:13.455240 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423144fa-9b01-4466-993c-6ab7075e1ad5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.455249 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="423144fa-9b01-4466-993c-6ab7075e1ad5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Mar 20 16:15:13 crc kubenswrapper[4730]: E0320 16:15:13.457287 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="extract-content"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.457336 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="extract-content"
Mar 20 16:15:13 crc kubenswrapper[4730]: E0320 16:15:13.457373 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="registry-server"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.457382 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="registry-server"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.457707 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af1b002-c577-4334-8304-5f44a67a5119" containerName="collect-profiles"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.457749 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="423144fa-9b01-4466-993c-6ab7075e1ad5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.457772 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" containerName="registry-server"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.458627 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.463895 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.464415 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.465913 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.465970 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.466346 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.466958 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"]
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.544235 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3" path="/var/lib/kubelet/pods/f7f46bf2-4fe4-4576-a9f1-eeec5b2bf8c3/volumes"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.605139 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.605546 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.605590 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.605721 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lcv\" (UniqueName: \"kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.605819 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.718397 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.718549 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.718656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.718678 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.718738 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lcv\" (UniqueName: \"kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.720825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.723449 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.724946 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.726872 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.747699 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lcv\" (UniqueName: \"kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fxwgt\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:13 crc kubenswrapper[4730]: I0320 16:15:13.778009 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"
Mar 20 16:15:14 crc kubenswrapper[4730]: I0320 16:15:14.271888 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt"]
Mar 20 16:15:14 crc kubenswrapper[4730]: I0320 16:15:14.278643 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 16:15:14 crc kubenswrapper[4730]: I0320 16:15:14.366415 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" event={"ID":"efd41cb9-678e-43d9-8643-b5aa95f1ec3e","Type":"ContainerStarted","Data":"2ef45405b82ea3ab2a346b005f8ce69fed405496721b84e102d2e54946ebc3d7"}
Mar 20 16:15:16 crc kubenswrapper[4730]: I0320 16:15:16.387988 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" event={"ID":"efd41cb9-678e-43d9-8643-b5aa95f1ec3e","Type":"ContainerStarted","Data":"cb32fa207a288ed830e19ca81c260eb403869416d2b909e2c71baf95e1baaba1"}
Mar 20 16:15:16 crc kubenswrapper[4730]: I0320 16:15:16.411060 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" podStartSLOduration=2.492880686 podStartE2EDuration="3.411041336s" podCreationTimestamp="2026-03-20 16:15:13 +0000 UTC" firstStartedPulling="2026-03-20 16:15:14.278337745 +0000 UTC m=+2173.491709124" lastFinishedPulling="2026-03-20 16:15:15.196498405 +0000 UTC m=+2174.409869774" observedRunningTime="2026-03-20 16:15:16.407467645 +0000 UTC m=+2175.620839014" watchObservedRunningTime="2026-03-20 16:15:16.411041336 +0000 UTC m=+2175.624412705"
Mar 20 16:15:49 crc kubenswrapper[4730]: I0320 16:15:49.151969 4730 scope.go:117] "RemoveContainer" containerID="647092b460bb07570b06908ca4f98239d0470ba3df7bb23adf207cb830d51de7"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.152758 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567056-5hbgw"]
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.154403 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567056-5hbgw"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.156344 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.156765 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.157318 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.176156 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567056-5hbgw"]
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.309089 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgkh\" (UniqueName: \"kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh\") pod \"auto-csr-approver-29567056-5hbgw\" (UID: \"331a4cf6-7d2c-4540-9686-064f27fee0cc\") " pod="openshift-infra/auto-csr-approver-29567056-5hbgw"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.410719 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgkh\" (UniqueName: \"kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh\") pod \"auto-csr-approver-29567056-5hbgw\" (UID: \"331a4cf6-7d2c-4540-9686-064f27fee0cc\") " pod="openshift-infra/auto-csr-approver-29567056-5hbgw"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.432942 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgkh\" (UniqueName: \"kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh\") pod \"auto-csr-approver-29567056-5hbgw\" (UID: \"331a4cf6-7d2c-4540-9686-064f27fee0cc\") " pod="openshift-infra/auto-csr-approver-29567056-5hbgw"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.470560 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567056-5hbgw"
Mar 20 16:16:00 crc kubenswrapper[4730]: I0320 16:16:00.983295 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567056-5hbgw"]
Mar 20 16:16:01 crc kubenswrapper[4730]: I0320 16:16:01.849749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567056-5hbgw" event={"ID":"331a4cf6-7d2c-4540-9686-064f27fee0cc","Type":"ContainerStarted","Data":"37e797fe97430638fe1123d2140225c9dd6493a8ddfc738a4bc98b5e0be148ae"}
Mar 20 16:16:02 crc kubenswrapper[4730]: I0320 16:16:02.861887 4730 generic.go:334] "Generic (PLEG): container finished" podID="331a4cf6-7d2c-4540-9686-064f27fee0cc" containerID="a7f958f4919aca64c652d826ae81971e8c020ebd62d216c39f71ac76d6e91ef4" exitCode=0
Mar 20 16:16:02 crc kubenswrapper[4730]: I0320 16:16:02.862351 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567056-5hbgw" event={"ID":"331a4cf6-7d2c-4540-9686-064f27fee0cc","Type":"ContainerDied","Data":"a7f958f4919aca64c652d826ae81971e8c020ebd62d216c39f71ac76d6e91ef4"}
Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.262421 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567056-5hbgw" Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.386658 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xgkh\" (UniqueName: \"kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh\") pod \"331a4cf6-7d2c-4540-9686-064f27fee0cc\" (UID: \"331a4cf6-7d2c-4540-9686-064f27fee0cc\") " Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.393636 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh" (OuterVolumeSpecName: "kube-api-access-7xgkh") pod "331a4cf6-7d2c-4540-9686-064f27fee0cc" (UID: "331a4cf6-7d2c-4540-9686-064f27fee0cc"). InnerVolumeSpecName "kube-api-access-7xgkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.488932 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xgkh\" (UniqueName: \"kubernetes.io/projected/331a4cf6-7d2c-4540-9686-064f27fee0cc-kube-api-access-7xgkh\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.886407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567056-5hbgw" event={"ID":"331a4cf6-7d2c-4540-9686-064f27fee0cc","Type":"ContainerDied","Data":"37e797fe97430638fe1123d2140225c9dd6493a8ddfc738a4bc98b5e0be148ae"} Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.886441 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567056-5hbgw" Mar 20 16:16:04 crc kubenswrapper[4730]: I0320 16:16:04.886453 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e797fe97430638fe1123d2140225c9dd6493a8ddfc738a4bc98b5e0be148ae" Mar 20 16:16:05 crc kubenswrapper[4730]: I0320 16:16:05.327735 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567050-7gqln"] Mar 20 16:16:05 crc kubenswrapper[4730]: I0320 16:16:05.337748 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567050-7gqln"] Mar 20 16:16:05 crc kubenswrapper[4730]: I0320 16:16:05.544513 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d689cf2-4142-40fd-9af3-13b98b99296d" path="/var/lib/kubelet/pods/9d689cf2-4142-40fd-9af3-13b98b99296d/volumes" Mar 20 16:16:12 crc kubenswrapper[4730]: I0320 16:16:12.879839 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:16:12 crc kubenswrapper[4730]: I0320 16:16:12.881740 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:16:19 crc kubenswrapper[4730]: I0320 16:16:19.007507 4730 generic.go:334] "Generic (PLEG): container finished" podID="efd41cb9-678e-43d9-8643-b5aa95f1ec3e" containerID="cb32fa207a288ed830e19ca81c260eb403869416d2b909e2c71baf95e1baaba1" exitCode=0 Mar 20 16:16:19 crc kubenswrapper[4730]: I0320 16:16:19.008397 4730 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" event={"ID":"efd41cb9-678e-43d9-8643-b5aa95f1ec3e","Type":"ContainerDied","Data":"cb32fa207a288ed830e19ca81c260eb403869416d2b909e2c71baf95e1baaba1"} Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.416641 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.500835 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory\") pod \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.500927 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam\") pod \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.501065 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8lcv\" (UniqueName: \"kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv\") pod \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.501099 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0\") pod \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.501152 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle\") pod \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\" (UID: \"efd41cb9-678e-43d9-8643-b5aa95f1ec3e\") " Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.512965 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "efd41cb9-678e-43d9-8643-b5aa95f1ec3e" (UID: "efd41cb9-678e-43d9-8643-b5aa95f1ec3e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.513713 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv" (OuterVolumeSpecName: "kube-api-access-q8lcv") pod "efd41cb9-678e-43d9-8643-b5aa95f1ec3e" (UID: "efd41cb9-678e-43d9-8643-b5aa95f1ec3e"). InnerVolumeSpecName "kube-api-access-q8lcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.528910 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "efd41cb9-678e-43d9-8643-b5aa95f1ec3e" (UID: "efd41cb9-678e-43d9-8643-b5aa95f1ec3e"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.533212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efd41cb9-678e-43d9-8643-b5aa95f1ec3e" (UID: "efd41cb9-678e-43d9-8643-b5aa95f1ec3e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.536430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory" (OuterVolumeSpecName: "inventory") pod "efd41cb9-678e-43d9-8643-b5aa95f1ec3e" (UID: "efd41cb9-678e-43d9-8643-b5aa95f1ec3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.603726 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8lcv\" (UniqueName: \"kubernetes.io/projected/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-kube-api-access-q8lcv\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.603762 4730 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.603771 4730 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.603779 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:20 crc kubenswrapper[4730]: I0320 16:16:20.603787 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efd41cb9-678e-43d9-8643-b5aa95f1ec3e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.032318 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" event={"ID":"efd41cb9-678e-43d9-8643-b5aa95f1ec3e","Type":"ContainerDied","Data":"2ef45405b82ea3ab2a346b005f8ce69fed405496721b84e102d2e54946ebc3d7"} Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.032366 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef45405b82ea3ab2a346b005f8ce69fed405496721b84e102d2e54946ebc3d7" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.032370 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fxwgt" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.129968 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw"] Mar 20 16:16:21 crc kubenswrapper[4730]: E0320 16:16:21.130508 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331a4cf6-7d2c-4540-9686-064f27fee0cc" containerName="oc" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.130532 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="331a4cf6-7d2c-4540-9686-064f27fee0cc" containerName="oc" Mar 20 16:16:21 crc kubenswrapper[4730]: E0320 16:16:21.130554 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd41cb9-678e-43d9-8643-b5aa95f1ec3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.130563 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd41cb9-678e-43d9-8643-b5aa95f1ec3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.130794 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="331a4cf6-7d2c-4540-9686-064f27fee0cc" containerName="oc" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.130819 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd41cb9-678e-43d9-8643-b5aa95f1ec3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.131621 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.134930 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.134947 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.135053 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.135102 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.135138 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.135226 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.157191 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw"] Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.215576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.215919 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.215955 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.215997 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.216031 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlr8\" (UniqueName: \"kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.216103 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318008 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318058 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318089 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318110 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318808 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlr8\" (UniqueName: \"kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.318877 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.323310 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.323524 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.323938 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.324125 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.324943 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.341211 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlr8\" (UniqueName: \"kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:21 crc kubenswrapper[4730]: I0320 16:16:21.493157 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:16:22 crc kubenswrapper[4730]: I0320 16:16:22.190178 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw"] Mar 20 16:16:23 crc kubenswrapper[4730]: I0320 16:16:23.050784 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" event={"ID":"74d70014-6de1-4d90-b04a-8f8376d3a9e0","Type":"ContainerStarted","Data":"50125756b79249e59512ad26ef5d86d2ed226aebe51736bf11f852d4e3237e00"} Mar 20 16:16:23 crc kubenswrapper[4730]: I0320 16:16:23.051391 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" event={"ID":"74d70014-6de1-4d90-b04a-8f8376d3a9e0","Type":"ContainerStarted","Data":"0838107093eacd9101dc280734e03b7fb6444281744acf3eb05444ff1c5b5488"} Mar 20 16:16:23 crc kubenswrapper[4730]: I0320 16:16:23.068851 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" podStartSLOduration=1.5380996420000002 podStartE2EDuration="2.068829783s" podCreationTimestamp="2026-03-20 16:16:21 +0000 UTC" firstStartedPulling="2026-03-20 16:16:22.1991371 +0000 UTC m=+2241.412508459" lastFinishedPulling="2026-03-20 16:16:22.729867231 +0000 UTC m=+2241.943238600" observedRunningTime="2026-03-20 16:16:23.064862811 +0000 UTC m=+2242.278234180" watchObservedRunningTime="2026-03-20 16:16:23.068829783 +0000 UTC m=+2242.282201152" Mar 20 16:16:42 crc kubenswrapper[4730]: I0320 16:16:42.880052 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:16:42 crc kubenswrapper[4730]: I0320 16:16:42.880400 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:16:49 crc kubenswrapper[4730]: I0320 16:16:49.244995 4730 scope.go:117] "RemoveContainer" containerID="41dec27fbddb23dabfba3fbf070ca912d2703e72f2511ee9ef62aa8a4e09aa09" Mar 20 16:17:11 crc kubenswrapper[4730]: I0320 16:17:11.548447 4730 generic.go:334] "Generic (PLEG): container finished" podID="74d70014-6de1-4d90-b04a-8f8376d3a9e0" containerID="50125756b79249e59512ad26ef5d86d2ed226aebe51736bf11f852d4e3237e00" exitCode=0 Mar 20 16:17:11 crc kubenswrapper[4730]: I0320 16:17:11.548512 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" event={"ID":"74d70014-6de1-4d90-b04a-8f8376d3a9e0","Type":"ContainerDied","Data":"50125756b79249e59512ad26ef5d86d2ed226aebe51736bf11f852d4e3237e00"} Mar 20 16:17:12 crc kubenswrapper[4730]: I0320 16:17:12.880748 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:17:12 crc kubenswrapper[4730]: I0320 16:17:12.881083 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:17:12 crc kubenswrapper[4730]: I0320 16:17:12.881131 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:17:12 crc kubenswrapper[4730]: I0320 16:17:12.881894 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:17:12 crc kubenswrapper[4730]: I0320 16:17:12.881950 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31" gracePeriod=600 Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.051785 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144418 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144668 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144706 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144758 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlr8\" (UniqueName: \"kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: 
\"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.144795 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory\") pod \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\" (UID: \"74d70014-6de1-4d90-b04a-8f8376d3a9e0\") " Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.151430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8" (OuterVolumeSpecName: "kube-api-access-bqlr8") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "kube-api-access-bqlr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.151773 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.183643 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.192360 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.192411 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory" (OuterVolumeSpecName: "inventory") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.215832 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74d70014-6de1-4d90-b04a-8f8376d3a9e0" (UID: "74d70014-6de1-4d90-b04a-8f8376d3a9e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.246852 4730 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.247210 4730 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.247224 4730 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.247235 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqlr8\" (UniqueName: \"kubernetes.io/projected/74d70014-6de1-4d90-b04a-8f8376d3a9e0-kube-api-access-bqlr8\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.247262 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.247276 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74d70014-6de1-4d90-b04a-8f8376d3a9e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.576020 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" 
containerID="b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31" exitCode=0 Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.576118 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31"} Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.576173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213"} Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.576192 4730 scope.go:117] "RemoveContainer" containerID="4a5122c368d04f6b91f59fb7f1f1be3d99580fb45609b2315a001469955ae120" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.580204 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" event={"ID":"74d70014-6de1-4d90-b04a-8f8376d3a9e0","Type":"ContainerDied","Data":"0838107093eacd9101dc280734e03b7fb6444281744acf3eb05444ff1c5b5488"} Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.580227 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0838107093eacd9101dc280734e03b7fb6444281744acf3eb05444ff1c5b5488" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.580360 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.701027 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj"] Mar 20 16:17:13 crc kubenswrapper[4730]: E0320 16:17:13.701548 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d70014-6de1-4d90-b04a-8f8376d3a9e0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.701571 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d70014-6de1-4d90-b04a-8f8376d3a9e0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.701820 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d70014-6de1-4d90-b04a-8f8376d3a9e0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.702485 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.705903 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.706139 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.706492 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.706635 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.706785 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.715775 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj"] Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.858056 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.859134 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.859193 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.859212 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.859236 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8mb\" (UniqueName: \"kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.961835 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.962028 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.962078 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.962102 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.962134 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8mb\" (UniqueName: \"kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.967018 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: 
\"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.967379 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.967578 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.970995 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:13 crc kubenswrapper[4730]: I0320 16:17:13.979678 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8mb\" (UniqueName: \"kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:14 crc kubenswrapper[4730]: I0320 16:17:14.037773 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:17:14 crc kubenswrapper[4730]: W0320 16:17:14.580823 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43d453db_c8fb_438d_927e_6eaee8383df1.slice/crio-4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7 WatchSource:0}: Error finding container 4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7: Status 404 returned error can't find the container with id 4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7 Mar 20 16:17:14 crc kubenswrapper[4730]: I0320 16:17:14.581138 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj"] Mar 20 16:17:15 crc kubenswrapper[4730]: I0320 16:17:15.604447 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" event={"ID":"43d453db-c8fb-438d-927e-6eaee8383df1","Type":"ContainerStarted","Data":"376af442bf9ebfb7a32ce79a2f7ed1f7e468476d5ad13d8d73708b12ea13c232"} Mar 20 16:17:15 crc kubenswrapper[4730]: I0320 16:17:15.605016 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" event={"ID":"43d453db-c8fb-438d-927e-6eaee8383df1","Type":"ContainerStarted","Data":"4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7"} Mar 20 16:17:15 crc kubenswrapper[4730]: I0320 16:17:15.654219 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" podStartSLOduration=2.205956246 podStartE2EDuration="2.654198087s" podCreationTimestamp="2026-03-20 16:17:13 +0000 UTC" firstStartedPulling="2026-03-20 16:17:14.58443459 +0000 UTC m=+2293.797805959" lastFinishedPulling="2026-03-20 16:17:15.032676401 +0000 UTC m=+2294.246047800" 
observedRunningTime="2026-03-20 16:17:15.649424261 +0000 UTC m=+2294.862795640" watchObservedRunningTime="2026-03-20 16:17:15.654198087 +0000 UTC m=+2294.867569456" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.152414 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567058-dbb42"] Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.154614 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.156489 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.156491 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.156973 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.163602 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567058-dbb42"] Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.208750 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh67\" (UniqueName: \"kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67\") pod \"auto-csr-approver-29567058-dbb42\" (UID: \"3e26c2d2-d860-4e2a-b8e9-d607220b44f7\") " pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.311031 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh67\" (UniqueName: \"kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67\") pod \"auto-csr-approver-29567058-dbb42\" (UID: 
\"3e26c2d2-d860-4e2a-b8e9-d607220b44f7\") " pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.331764 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hh67\" (UniqueName: \"kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67\") pod \"auto-csr-approver-29567058-dbb42\" (UID: \"3e26c2d2-d860-4e2a-b8e9-d607220b44f7\") " pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.481295 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:00 crc kubenswrapper[4730]: I0320 16:18:00.933610 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567058-dbb42"] Mar 20 16:18:01 crc kubenswrapper[4730]: I0320 16:18:01.048541 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567058-dbb42" event={"ID":"3e26c2d2-d860-4e2a-b8e9-d607220b44f7","Type":"ContainerStarted","Data":"87c535f2369abdc09480d5585fa4bc3a0c172bc32e1112ce544c29239d4f5deb"} Mar 20 16:18:03 crc kubenswrapper[4730]: I0320 16:18:03.077179 4730 generic.go:334] "Generic (PLEG): container finished" podID="3e26c2d2-d860-4e2a-b8e9-d607220b44f7" containerID="7c5e3ddc85e2a694aa4b6f5091893d6172a7cc9cc61ecbf5665992a51d9cf392" exitCode=0 Mar 20 16:18:03 crc kubenswrapper[4730]: I0320 16:18:03.077307 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567058-dbb42" event={"ID":"3e26c2d2-d860-4e2a-b8e9-d607220b44f7","Type":"ContainerDied","Data":"7c5e3ddc85e2a694aa4b6f5091893d6172a7cc9cc61ecbf5665992a51d9cf392"} Mar 20 16:18:04 crc kubenswrapper[4730]: I0320 16:18:04.418522 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:04 crc kubenswrapper[4730]: I0320 16:18:04.487750 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hh67\" (UniqueName: \"kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67\") pod \"3e26c2d2-d860-4e2a-b8e9-d607220b44f7\" (UID: \"3e26c2d2-d860-4e2a-b8e9-d607220b44f7\") " Mar 20 16:18:04 crc kubenswrapper[4730]: I0320 16:18:04.493518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67" (OuterVolumeSpecName: "kube-api-access-5hh67") pod "3e26c2d2-d860-4e2a-b8e9-d607220b44f7" (UID: "3e26c2d2-d860-4e2a-b8e9-d607220b44f7"). InnerVolumeSpecName "kube-api-access-5hh67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:18:04 crc kubenswrapper[4730]: I0320 16:18:04.592090 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hh67\" (UniqueName: \"kubernetes.io/projected/3e26c2d2-d860-4e2a-b8e9-d607220b44f7-kube-api-access-5hh67\") on node \"crc\" DevicePath \"\"" Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.099239 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567058-dbb42" event={"ID":"3e26c2d2-d860-4e2a-b8e9-d607220b44f7","Type":"ContainerDied","Data":"87c535f2369abdc09480d5585fa4bc3a0c172bc32e1112ce544c29239d4f5deb"} Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.099309 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c535f2369abdc09480d5585fa4bc3a0c172bc32e1112ce544c29239d4f5deb" Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.099376 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567058-dbb42" Mar 20 16:18:05 crc kubenswrapper[4730]: E0320 16:18:05.200025 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e26c2d2_d860_4e2a_b8e9_d607220b44f7.slice\": RecentStats: unable to find data in memory cache]" Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.484653 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567052-fb4zl"] Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.494281 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567052-fb4zl"] Mar 20 16:18:05 crc kubenswrapper[4730]: I0320 16:18:05.546555 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28beb66f-2a64-4bcf-94eb-676ef7f1236a" path="/var/lib/kubelet/pods/28beb66f-2a64-4bcf-94eb-676ef7f1236a/volumes" Mar 20 16:18:49 crc kubenswrapper[4730]: I0320 16:18:49.376284 4730 scope.go:117] "RemoveContainer" containerID="09bf1a5b6b98230c97ec660d74eb6fc018c3a7b8b7105355e719108bd3861003" Mar 20 16:19:42 crc kubenswrapper[4730]: I0320 16:19:42.880345 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:19:42 crc kubenswrapper[4730]: I0320 16:19:42.881172 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 
16:20:00.139772 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567060-dbzpk"] Mar 20 16:20:00 crc kubenswrapper[4730]: E0320 16:20:00.140696 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e26c2d2-d860-4e2a-b8e9-d607220b44f7" containerName="oc" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.140707 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e26c2d2-d860-4e2a-b8e9-d607220b44f7" containerName="oc" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.140913 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e26c2d2-d860-4e2a-b8e9-d607220b44f7" containerName="oc" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.141625 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.144055 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.144613 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.145052 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.151144 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567060-dbzpk"] Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.275797 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz44r\" (UniqueName: \"kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r\") pod \"auto-csr-approver-29567060-dbzpk\" (UID: \"95b03afb-153e-4f6b-a88a-e64d8a889b97\") " 
pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.377770 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz44r\" (UniqueName: \"kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r\") pod \"auto-csr-approver-29567060-dbzpk\" (UID: \"95b03afb-153e-4f6b-a88a-e64d8a889b97\") " pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.397035 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz44r\" (UniqueName: \"kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r\") pod \"auto-csr-approver-29567060-dbzpk\" (UID: \"95b03afb-153e-4f6b-a88a-e64d8a889b97\") " pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.473427 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:00 crc kubenswrapper[4730]: I0320 16:20:00.913695 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567060-dbzpk"] Mar 20 16:20:01 crc kubenswrapper[4730]: I0320 16:20:01.209019 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" event={"ID":"95b03afb-153e-4f6b-a88a-e64d8a889b97","Type":"ContainerStarted","Data":"0569d41e4f19d35f7cf3bc7363d39efefe7685a284238a7673c21960e8372c4d"} Mar 20 16:20:03 crc kubenswrapper[4730]: I0320 16:20:03.231926 4730 generic.go:334] "Generic (PLEG): container finished" podID="95b03afb-153e-4f6b-a88a-e64d8a889b97" containerID="fea91f3e29a9829ef950c2b8d1b25f02c8cfb65d5e94d7557e912ffa33559bad" exitCode=0 Mar 20 16:20:03 crc kubenswrapper[4730]: I0320 16:20:03.232107 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567060-dbzpk" event={"ID":"95b03afb-153e-4f6b-a88a-e64d8a889b97","Type":"ContainerDied","Data":"fea91f3e29a9829ef950c2b8d1b25f02c8cfb65d5e94d7557e912ffa33559bad"} Mar 20 16:20:04 crc kubenswrapper[4730]: I0320 16:20:04.548686 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:04 crc kubenswrapper[4730]: I0320 16:20:04.668139 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz44r\" (UniqueName: \"kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r\") pod \"95b03afb-153e-4f6b-a88a-e64d8a889b97\" (UID: \"95b03afb-153e-4f6b-a88a-e64d8a889b97\") " Mar 20 16:20:04 crc kubenswrapper[4730]: I0320 16:20:04.677481 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r" (OuterVolumeSpecName: "kube-api-access-sz44r") pod "95b03afb-153e-4f6b-a88a-e64d8a889b97" (UID: "95b03afb-153e-4f6b-a88a-e64d8a889b97"). InnerVolumeSpecName "kube-api-access-sz44r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:20:04 crc kubenswrapper[4730]: I0320 16:20:04.770196 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz44r\" (UniqueName: \"kubernetes.io/projected/95b03afb-153e-4f6b-a88a-e64d8a889b97-kube-api-access-sz44r\") on node \"crc\" DevicePath \"\"" Mar 20 16:20:05 crc kubenswrapper[4730]: I0320 16:20:05.254773 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" event={"ID":"95b03afb-153e-4f6b-a88a-e64d8a889b97","Type":"ContainerDied","Data":"0569d41e4f19d35f7cf3bc7363d39efefe7685a284238a7673c21960e8372c4d"} Mar 20 16:20:05 crc kubenswrapper[4730]: I0320 16:20:05.254845 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0569d41e4f19d35f7cf3bc7363d39efefe7685a284238a7673c21960e8372c4d" Mar 20 16:20:05 crc kubenswrapper[4730]: I0320 16:20:05.254893 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567060-dbzpk" Mar 20 16:20:05 crc kubenswrapper[4730]: I0320 16:20:05.621438 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567054-glf2f"] Mar 20 16:20:05 crc kubenswrapper[4730]: I0320 16:20:05.630674 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567054-glf2f"] Mar 20 16:20:07 crc kubenswrapper[4730]: I0320 16:20:07.545650 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62c2430-2f2b-49f0-848a-015a72d04090" path="/var/lib/kubelet/pods/d62c2430-2f2b-49f0-848a-015a72d04090/volumes" Mar 20 16:20:12 crc kubenswrapper[4730]: I0320 16:20:12.879774 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 16:20:12 crc kubenswrapper[4730]: I0320 16:20:12.880417 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:20:42 crc kubenswrapper[4730]: I0320 16:20:42.880300 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:20:42 crc kubenswrapper[4730]: I0320 16:20:42.880800 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:20:42 crc kubenswrapper[4730]: I0320 16:20:42.880836 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:20:42 crc kubenswrapper[4730]: I0320 16:20:42.881520 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:20:42 crc kubenswrapper[4730]: I0320 16:20:42.881561 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" gracePeriod=600 Mar 20 16:20:43 crc kubenswrapper[4730]: E0320 16:20:43.012319 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:20:43 crc kubenswrapper[4730]: I0320 16:20:43.652322 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" exitCode=0 Mar 20 16:20:43 crc kubenswrapper[4730]: I0320 16:20:43.652398 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213"} Mar 20 16:20:43 crc kubenswrapper[4730]: I0320 16:20:43.652866 4730 scope.go:117] "RemoveContainer" containerID="b7dfce64faa161154e6afb28ae3e7685cc4caead60043a4bb51030ee7d13fb31" Mar 20 16:20:43 crc kubenswrapper[4730]: I0320 16:20:43.653577 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:20:43 crc kubenswrapper[4730]: E0320 16:20:43.654053 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:20:49 crc kubenswrapper[4730]: I0320 16:20:49.495645 4730 scope.go:117] "RemoveContainer" containerID="88155d5b3d3f84b9a79ccb85b9d478d5415c51d7944f4bb78a548434ba4fb653" Mar 20 16:20:57 crc kubenswrapper[4730]: I0320 16:20:57.534178 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:20:57 crc kubenswrapper[4730]: E0320 16:20:57.534920 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:21:09 crc kubenswrapper[4730]: I0320 16:21:09.913789 4730 generic.go:334] "Generic (PLEG): container finished" podID="43d453db-c8fb-438d-927e-6eaee8383df1" containerID="376af442bf9ebfb7a32ce79a2f7ed1f7e468476d5ad13d8d73708b12ea13c232" exitCode=0 Mar 20 16:21:09 crc kubenswrapper[4730]: I0320 16:21:09.913868 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" event={"ID":"43d453db-c8fb-438d-927e-6eaee8383df1","Type":"ContainerDied","Data":"376af442bf9ebfb7a32ce79a2f7ed1f7e468476d5ad13d8d73708b12ea13c232"} Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.323557 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.405433 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam\") pod \"43d453db-c8fb-438d-927e-6eaee8383df1\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.405534 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory\") pod \"43d453db-c8fb-438d-927e-6eaee8383df1\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.405564 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle\") pod \"43d453db-c8fb-438d-927e-6eaee8383df1\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.405630 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx8mb\" (UniqueName: \"kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb\") pod \"43d453db-c8fb-438d-927e-6eaee8383df1\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.405657 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0\") pod \"43d453db-c8fb-438d-927e-6eaee8383df1\" (UID: \"43d453db-c8fb-438d-927e-6eaee8383df1\") " Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.411050 4730 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "43d453db-c8fb-438d-927e-6eaee8383df1" (UID: "43d453db-c8fb-438d-927e-6eaee8383df1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.411261 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb" (OuterVolumeSpecName: "kube-api-access-nx8mb") pod "43d453db-c8fb-438d-927e-6eaee8383df1" (UID: "43d453db-c8fb-438d-927e-6eaee8383df1"). InnerVolumeSpecName "kube-api-access-nx8mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.434785 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory" (OuterVolumeSpecName: "inventory") pod "43d453db-c8fb-438d-927e-6eaee8383df1" (UID: "43d453db-c8fb-438d-927e-6eaee8383df1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.435192 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43d453db-c8fb-438d-927e-6eaee8383df1" (UID: "43d453db-c8fb-438d-927e-6eaee8383df1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.439011 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "43d453db-c8fb-438d-927e-6eaee8383df1" (UID: "43d453db-c8fb-438d-927e-6eaee8383df1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.508298 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.508329 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.508340 4730 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.508349 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx8mb\" (UniqueName: \"kubernetes.io/projected/43d453db-c8fb-438d-927e-6eaee8383df1-kube-api-access-nx8mb\") on node \"crc\" DevicePath \"\"" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.508362 4730 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/43d453db-c8fb-438d-927e-6eaee8383df1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.541442 4730 scope.go:117] "RemoveContainer" 
containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:21:11 crc kubenswrapper[4730]: E0320 16:21:11.541738 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.935758 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" event={"ID":"43d453db-c8fb-438d-927e-6eaee8383df1","Type":"ContainerDied","Data":"4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7"} Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.935804 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9ce008b0379b201268b642fefda0155599bddcd66e572dcdc376e727bf72f7" Mar 20 16:21:11 crc kubenswrapper[4730]: I0320 16:21:11.935819 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.025154 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58"] Mar 20 16:21:12 crc kubenswrapper[4730]: E0320 16:21:12.025655 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b03afb-153e-4f6b-a88a-e64d8a889b97" containerName="oc" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.025678 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b03afb-153e-4f6b-a88a-e64d8a889b97" containerName="oc" Mar 20 16:21:12 crc kubenswrapper[4730]: E0320 16:21:12.025703 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d453db-c8fb-438d-927e-6eaee8383df1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.025712 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d453db-c8fb-438d-927e-6eaee8383df1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.025960 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b03afb-153e-4f6b-a88a-e64d8a889b97" containerName="oc" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.026003 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d453db-c8fb-438d-927e-6eaee8383df1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.026903 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.029172 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.030287 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.030524 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.030697 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.030765 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.030845 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.031401 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.051136 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58"] Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123232 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: 
I0320 16:21:12.123325 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkl8b\" (UniqueName: \"kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123430 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123471 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123496 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123551 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" 
(UniqueName: \"kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123737 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.123946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.124000 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.124081 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.225765 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.225901 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.225931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkl8b\" (UniqueName: \"kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226512 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226534 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226556 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226593 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226736 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226841 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.226880 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.230083 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.230863 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.243300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.255136 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.256048 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.256212 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: 
\"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.256738 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.264163 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.269657 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.270739 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkl8b\" (UniqueName: \"kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.274159 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4t58\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.345325 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.952229 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:21:12 crc kubenswrapper[4730]: I0320 16:21:12.954757 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58"] Mar 20 16:21:13 crc kubenswrapper[4730]: I0320 16:21:13.970533 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" event={"ID":"6ffb462f-06f9-49df-bfe7-d41c274d4b05","Type":"ContainerStarted","Data":"08be8b8dbbd2047f288f6d426869c884bfb40bb5064e00dce56c6a11bd7559c8"} Mar 20 16:21:13 crc kubenswrapper[4730]: I0320 16:21:13.971097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" event={"ID":"6ffb462f-06f9-49df-bfe7-d41c274d4b05","Type":"ContainerStarted","Data":"f7754fd399f7dcad362df6e337682fddcbb9031ebb6354f3687e1026101703e4"} Mar 20 16:21:13 crc kubenswrapper[4730]: I0320 16:21:13.992466 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" podStartSLOduration=1.5063335740000001 podStartE2EDuration="1.992445436s" podCreationTimestamp="2026-03-20 16:21:12 +0000 UTC" firstStartedPulling="2026-03-20 16:21:12.951951254 +0000 UTC m=+2532.165322623" lastFinishedPulling="2026-03-20 16:21:13.438063116 +0000 UTC m=+2532.651434485" 
observedRunningTime="2026-03-20 16:21:13.990762899 +0000 UTC m=+2533.204134268" watchObservedRunningTime="2026-03-20 16:21:13.992445436 +0000 UTC m=+2533.205816805" Mar 20 16:21:25 crc kubenswrapper[4730]: I0320 16:21:25.533572 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:21:25 crc kubenswrapper[4730]: E0320 16:21:25.534185 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:21:39 crc kubenswrapper[4730]: I0320 16:21:39.533638 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:21:39 crc kubenswrapper[4730]: E0320 16:21:39.534582 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:21:51 crc kubenswrapper[4730]: I0320 16:21:51.539988 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:21:51 crc kubenswrapper[4730]: E0320 16:21:51.541011 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.146604 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567062-58ww8"] Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.149444 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.151697 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.155101 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.158375 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567062-58ww8"] Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.158748 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.202045 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpccc\" (UniqueName: \"kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc\") pod \"auto-csr-approver-29567062-58ww8\" (UID: \"e053a518-d8c9-42a1-9c8d-83d5fec8de8c\") " pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.304052 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpccc\" (UniqueName: \"kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc\") 
pod \"auto-csr-approver-29567062-58ww8\" (UID: \"e053a518-d8c9-42a1-9c8d-83d5fec8de8c\") " pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.325424 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpccc\" (UniqueName: \"kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc\") pod \"auto-csr-approver-29567062-58ww8\" (UID: \"e053a518-d8c9-42a1-9c8d-83d5fec8de8c\") " pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.474600 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:00 crc kubenswrapper[4730]: I0320 16:22:00.937905 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567062-58ww8"] Mar 20 16:22:01 crc kubenswrapper[4730]: I0320 16:22:01.391798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567062-58ww8" event={"ID":"e053a518-d8c9-42a1-9c8d-83d5fec8de8c","Type":"ContainerStarted","Data":"786f74eda8c049abd12af2c2ff6876a35e1dc8bbdf0c4f7dda9bf7bb8d5cecc3"} Mar 20 16:22:03 crc kubenswrapper[4730]: I0320 16:22:03.422633 4730 generic.go:334] "Generic (PLEG): container finished" podID="e053a518-d8c9-42a1-9c8d-83d5fec8de8c" containerID="0ccfc2a0baaac1ea53dda2b6020b62a6460f7d7641aebee239ddb879e9c99ccb" exitCode=0 Mar 20 16:22:03 crc kubenswrapper[4730]: I0320 16:22:03.422719 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567062-58ww8" event={"ID":"e053a518-d8c9-42a1-9c8d-83d5fec8de8c","Type":"ContainerDied","Data":"0ccfc2a0baaac1ea53dda2b6020b62a6460f7d7641aebee239ddb879e9c99ccb"} Mar 20 16:22:03 crc kubenswrapper[4730]: I0320 16:22:03.533853 4730 scope.go:117] "RemoveContainer" 
containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:22:03 crc kubenswrapper[4730]: E0320 16:22:03.534595 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:22:04 crc kubenswrapper[4730]: I0320 16:22:04.878380 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.013798 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpccc\" (UniqueName: \"kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc\") pod \"e053a518-d8c9-42a1-9c8d-83d5fec8de8c\" (UID: \"e053a518-d8c9-42a1-9c8d-83d5fec8de8c\") " Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.027888 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc" (OuterVolumeSpecName: "kube-api-access-fpccc") pod "e053a518-d8c9-42a1-9c8d-83d5fec8de8c" (UID: "e053a518-d8c9-42a1-9c8d-83d5fec8de8c"). InnerVolumeSpecName "kube-api-access-fpccc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.116626 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpccc\" (UniqueName: \"kubernetes.io/projected/e053a518-d8c9-42a1-9c8d-83d5fec8de8c-kube-api-access-fpccc\") on node \"crc\" DevicePath \"\"" Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.440156 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567062-58ww8" event={"ID":"e053a518-d8c9-42a1-9c8d-83d5fec8de8c","Type":"ContainerDied","Data":"786f74eda8c049abd12af2c2ff6876a35e1dc8bbdf0c4f7dda9bf7bb8d5cecc3"} Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.440202 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="786f74eda8c049abd12af2c2ff6876a35e1dc8bbdf0c4f7dda9bf7bb8d5cecc3" Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.440232 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567062-58ww8" Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.951320 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567056-5hbgw"] Mar 20 16:22:05 crc kubenswrapper[4730]: I0320 16:22:05.961043 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567056-5hbgw"] Mar 20 16:22:07 crc kubenswrapper[4730]: I0320 16:22:07.545232 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331a4cf6-7d2c-4540-9686-064f27fee0cc" path="/var/lib/kubelet/pods/331a4cf6-7d2c-4540-9686-064f27fee0cc/volumes" Mar 20 16:22:15 crc kubenswrapper[4730]: I0320 16:22:15.534155 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:22:15 crc kubenswrapper[4730]: E0320 16:22:15.535137 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:22:27 crc kubenswrapper[4730]: I0320 16:22:27.533758 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:22:27 crc kubenswrapper[4730]: E0320 16:22:27.534538 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:22:41 crc kubenswrapper[4730]: I0320 16:22:41.538894 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:22:41 crc kubenswrapper[4730]: E0320 16:22:41.539739 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:22:49 crc kubenswrapper[4730]: I0320 16:22:49.615204 4730 scope.go:117] "RemoveContainer" containerID="a7f958f4919aca64c652d826ae81971e8c020ebd62d216c39f71ac76d6e91ef4" Mar 20 16:22:52 crc kubenswrapper[4730]: I0320 16:22:52.533291 4730 scope.go:117] "RemoveContainer" 
containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:22:52 crc kubenswrapper[4730]: E0320 16:22:52.533811 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:23:06 crc kubenswrapper[4730]: I0320 16:23:06.534625 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:23:06 crc kubenswrapper[4730]: E0320 16:23:06.535838 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:23:17 crc kubenswrapper[4730]: I0320 16:23:17.533882 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:23:17 crc kubenswrapper[4730]: E0320 16:23:17.534705 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:23:29 crc kubenswrapper[4730]: I0320 16:23:29.532927 4730 scope.go:117] 
"RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:23:29 crc kubenswrapper[4730]: E0320 16:23:29.534846 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:23:36 crc kubenswrapper[4730]: I0320 16:23:36.291848 4730 generic.go:334] "Generic (PLEG): container finished" podID="6ffb462f-06f9-49df-bfe7-d41c274d4b05" containerID="08be8b8dbbd2047f288f6d426869c884bfb40bb5064e00dce56c6a11bd7559c8" exitCode=0 Mar 20 16:23:36 crc kubenswrapper[4730]: I0320 16:23:36.291988 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" event={"ID":"6ffb462f-06f9-49df-bfe7-d41c274d4b05","Type":"ContainerDied","Data":"08be8b8dbbd2047f288f6d426869c884bfb40bb5064e00dce56c6a11bd7559c8"} Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.715665 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.790736 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.790814 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.790895 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkl8b\" (UniqueName: \"kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.790965 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791122 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 
16:23:37.791168 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791190 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791218 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791301 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791324 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.791382 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory\") pod \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\" (UID: \"6ffb462f-06f9-49df-bfe7-d41c274d4b05\") " Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.797354 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b" (OuterVolumeSpecName: "kube-api-access-fkl8b") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "kube-api-access-fkl8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.797463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.816766 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.822146 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory" (OuterVolumeSpecName: "inventory") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.823029 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.823213 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.824389 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.825475 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.825924 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.827894 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.831635 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6ffb462f-06f9-49df-bfe7-d41c274d4b05" (UID: "6ffb462f-06f9-49df-bfe7-d41c274d4b05"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894797 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894837 4730 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894847 4730 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894856 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkl8b\" (UniqueName: \"kubernetes.io/projected/6ffb462f-06f9-49df-bfe7-d41c274d4b05-kube-api-access-fkl8b\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894866 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894875 4730 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894891 4730 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-migration-ssh-key-0\") on node 
\"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894899 4730 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894907 4730 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894917 4730 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:37 crc kubenswrapper[4730]: I0320 16:23:37.894933 4730 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ffb462f-06f9-49df-bfe7-d41c274d4b05-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.309214 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" event={"ID":"6ffb462f-06f9-49df-bfe7-d41c274d4b05","Type":"ContainerDied","Data":"f7754fd399f7dcad362df6e337682fddcbb9031ebb6354f3687e1026101703e4"} Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.309279 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4t58" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.309301 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7754fd399f7dcad362df6e337682fddcbb9031ebb6354f3687e1026101703e4" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.425486 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq"] Mar 20 16:23:38 crc kubenswrapper[4730]: E0320 16:23:38.425876 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e053a518-d8c9-42a1-9c8d-83d5fec8de8c" containerName="oc" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.425888 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e053a518-d8c9-42a1-9c8d-83d5fec8de8c" containerName="oc" Mar 20 16:23:38 crc kubenswrapper[4730]: E0320 16:23:38.425920 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffb462f-06f9-49df-bfe7-d41c274d4b05" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.425928 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffb462f-06f9-49df-bfe7-d41c274d4b05" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.426095 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffb462f-06f9-49df-bfe7-d41c274d4b05" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.426122 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e053a518-d8c9-42a1-9c8d-83d5fec8de8c" containerName="oc" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.426829 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.430345 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.430561 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.431367 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.431664 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.431750 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vvsxx" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.444872 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq"] Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.505463 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.505774 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.505966 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.506076 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.506237 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vccl6\" (UniqueName: \"kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.506425 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.506573 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608656 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608733 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608771 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608832 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vccl6\" (UniqueName: \"kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608895 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.608949 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.609070 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.612328 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.612680 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.613448 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.614158 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.614747 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.615483 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.624824 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vccl6\" (UniqueName: \"kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:38 crc kubenswrapper[4730]: I0320 16:23:38.744105 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:23:39 crc kubenswrapper[4730]: I0320 16:23:39.290932 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq"] Mar 20 16:23:39 crc kubenswrapper[4730]: I0320 16:23:39.319362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" event={"ID":"884c2fa6-babb-44b8-b8e2-3e4fbce27153","Type":"ContainerStarted","Data":"7ab2683493c806bbc0a72a38ad816fd90029738a822e7670b5034e10e4d79b42"} Mar 20 16:23:40 crc kubenswrapper[4730]: I0320 16:23:40.334362 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" event={"ID":"884c2fa6-babb-44b8-b8e2-3e4fbce27153","Type":"ContainerStarted","Data":"4f23a2d6263e179fbb42b831d0fa59c0e1411d46b87becc3c39bd7883db89197"} Mar 20 16:23:41 crc kubenswrapper[4730]: I0320 16:23:40.372727 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" podStartSLOduration=1.936028289 podStartE2EDuration="2.372702113s" podCreationTimestamp="2026-03-20 16:23:38 +0000 UTC" firstStartedPulling="2026-03-20 16:23:39.299738891 +0000 UTC m=+2678.513110260" lastFinishedPulling="2026-03-20 16:23:39.736412705 +0000 UTC m=+2678.949784084" observedRunningTime="2026-03-20 16:23:40.362636597 +0000 UTC m=+2679.576007986" watchObservedRunningTime="2026-03-20 16:23:40.372702113 +0000 UTC m=+2679.586073482" Mar 20 16:23:44 crc kubenswrapper[4730]: I0320 16:23:44.532692 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:23:44 crc kubenswrapper[4730]: E0320 16:23:44.533442 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:23:59 crc kubenswrapper[4730]: I0320 16:23:59.532925 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:23:59 crc kubenswrapper[4730]: E0320 16:23:59.533911 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.162519 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567064-sklm5"] Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.164516 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.167363 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.167620 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.167809 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.170834 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567064-sklm5"] Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.244620 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8xj\" (UniqueName: \"kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj\") pod \"auto-csr-approver-29567064-sklm5\" (UID: \"a36e403d-410f-40cc-8441-66c444837d24\") " pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.345989 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g8xj\" (UniqueName: \"kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj\") pod \"auto-csr-approver-29567064-sklm5\" (UID: \"a36e403d-410f-40cc-8441-66c444837d24\") " pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.365316 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g8xj\" (UniqueName: \"kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj\") pod \"auto-csr-approver-29567064-sklm5\" (UID: \"a36e403d-410f-40cc-8441-66c444837d24\") " 
pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.484024 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:00 crc kubenswrapper[4730]: I0320 16:24:00.955699 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567064-sklm5"] Mar 20 16:24:00 crc kubenswrapper[4730]: W0320 16:24:00.959080 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36e403d_410f_40cc_8441_66c444837d24.slice/crio-6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871 WatchSource:0}: Error finding container 6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871: Status 404 returned error can't find the container with id 6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871 Mar 20 16:24:01 crc kubenswrapper[4730]: I0320 16:24:01.548131 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567064-sklm5" event={"ID":"a36e403d-410f-40cc-8441-66c444837d24","Type":"ContainerStarted","Data":"6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871"} Mar 20 16:24:02 crc kubenswrapper[4730]: I0320 16:24:02.547482 4730 generic.go:334] "Generic (PLEG): container finished" podID="a36e403d-410f-40cc-8441-66c444837d24" containerID="1718e89ccd737bfb9a3619c68d04fdd94aa68dd80d2bc675347f69c2cc40fd04" exitCode=0 Mar 20 16:24:02 crc kubenswrapper[4730]: I0320 16:24:02.547540 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567064-sklm5" event={"ID":"a36e403d-410f-40cc-8441-66c444837d24","Type":"ContainerDied","Data":"1718e89ccd737bfb9a3619c68d04fdd94aa68dd80d2bc675347f69c2cc40fd04"} Mar 20 16:24:03 crc kubenswrapper[4730]: I0320 16:24:03.906448 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:03 crc kubenswrapper[4730]: I0320 16:24:03.929775 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8xj\" (UniqueName: \"kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj\") pod \"a36e403d-410f-40cc-8441-66c444837d24\" (UID: \"a36e403d-410f-40cc-8441-66c444837d24\") " Mar 20 16:24:03 crc kubenswrapper[4730]: I0320 16:24:03.938585 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj" (OuterVolumeSpecName: "kube-api-access-4g8xj") pod "a36e403d-410f-40cc-8441-66c444837d24" (UID: "a36e403d-410f-40cc-8441-66c444837d24"). InnerVolumeSpecName "kube-api-access-4g8xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.031209 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g8xj\" (UniqueName: \"kubernetes.io/projected/a36e403d-410f-40cc-8441-66c444837d24-kube-api-access-4g8xj\") on node \"crc\" DevicePath \"\"" Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.573705 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567064-sklm5" event={"ID":"a36e403d-410f-40cc-8441-66c444837d24","Type":"ContainerDied","Data":"6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871"} Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.573752 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e6dae407208c0df8a73bdfd1274fe27ac4f9dd3cff0bfcf4c1f538cba05d871" Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.573825 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567064-sklm5" Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.977610 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567058-dbb42"] Mar 20 16:24:04 crc kubenswrapper[4730]: I0320 16:24:04.988343 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567058-dbb42"] Mar 20 16:24:05 crc kubenswrapper[4730]: I0320 16:24:05.547033 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e26c2d2-d860-4e2a-b8e9-d607220b44f7" path="/var/lib/kubelet/pods/3e26c2d2-d860-4e2a-b8e9-d607220b44f7/volumes" Mar 20 16:24:13 crc kubenswrapper[4730]: I0320 16:24:13.533747 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:24:13 crc kubenswrapper[4730]: E0320 16:24:13.534528 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:24:27 crc kubenswrapper[4730]: I0320 16:24:27.533843 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:24:27 crc kubenswrapper[4730]: E0320 16:24:27.534730 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.091492 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:37 crc kubenswrapper[4730]: E0320 16:24:37.092607 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36e403d-410f-40cc-8441-66c444837d24" containerName="oc" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.092627 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36e403d-410f-40cc-8441-66c444837d24" containerName="oc" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.092881 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36e403d-410f-40cc-8441-66c444837d24" containerName="oc" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.094675 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.101406 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.246882 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcclj\" (UniqueName: \"kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.247312 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc 
kubenswrapper[4730]: I0320 16:24:37.247390 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.349000 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.349044 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.349124 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcclj\" (UniqueName: \"kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.349580 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc 
kubenswrapper[4730]: I0320 16:24:37.349698 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.369819 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcclj\" (UniqueName: \"kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj\") pod \"certified-operators-gkg85\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.429502 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:37 crc kubenswrapper[4730]: I0320 16:24:37.970211 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:38 crc kubenswrapper[4730]: I0320 16:24:38.918719 4730 generic.go:334] "Generic (PLEG): container finished" podID="99180a72-d38c-44e4-b866-691567a72781" containerID="b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455" exitCode=0 Mar 20 16:24:38 crc kubenswrapper[4730]: I0320 16:24:38.918771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerDied","Data":"b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455"} Mar 20 16:24:38 crc kubenswrapper[4730]: I0320 16:24:38.918803 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" 
event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerStarted","Data":"3fa6e12c75e331bc0af19554791977b74f6d4895aba6bdbc02bb29ddc8d7e1db"} Mar 20 16:24:39 crc kubenswrapper[4730]: I0320 16:24:39.930276 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerStarted","Data":"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5"} Mar 20 16:24:40 crc kubenswrapper[4730]: I0320 16:24:40.533601 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:24:40 crc kubenswrapper[4730]: E0320 16:24:40.534174 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:24:40 crc kubenswrapper[4730]: I0320 16:24:40.945620 4730 generic.go:334] "Generic (PLEG): container finished" podID="99180a72-d38c-44e4-b866-691567a72781" containerID="8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5" exitCode=0 Mar 20 16:24:40 crc kubenswrapper[4730]: I0320 16:24:40.945684 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerDied","Data":"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5"} Mar 20 16:24:41 crc kubenswrapper[4730]: I0320 16:24:41.957347 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" 
event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerStarted","Data":"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504"} Mar 20 16:24:41 crc kubenswrapper[4730]: I0320 16:24:41.974079 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkg85" podStartSLOduration=2.511380308 podStartE2EDuration="4.974057095s" podCreationTimestamp="2026-03-20 16:24:37 +0000 UTC" firstStartedPulling="2026-03-20 16:24:38.920514651 +0000 UTC m=+2738.133886020" lastFinishedPulling="2026-03-20 16:24:41.383191418 +0000 UTC m=+2740.596562807" observedRunningTime="2026-03-20 16:24:41.972926903 +0000 UTC m=+2741.186298272" watchObservedRunningTime="2026-03-20 16:24:41.974057095 +0000 UTC m=+2741.187428464" Mar 20 16:24:47 crc kubenswrapper[4730]: I0320 16:24:47.429801 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:47 crc kubenswrapper[4730]: I0320 16:24:47.430086 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:47 crc kubenswrapper[4730]: I0320 16:24:47.474234 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:48 crc kubenswrapper[4730]: I0320 16:24:48.067322 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:48 crc kubenswrapper[4730]: I0320 16:24:48.243501 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:49 crc kubenswrapper[4730]: I0320 16:24:49.710390 4730 scope.go:117] "RemoveContainer" containerID="7c5e3ddc85e2a694aa4b6f5091893d6172a7cc9cc61ecbf5665992a51d9cf392" Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.022047 4730 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-gkg85" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="registry-server" containerID="cri-o://f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504" gracePeriod=2 Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.454444 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.600156 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcclj\" (UniqueName: \"kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj\") pod \"99180a72-d38c-44e4-b866-691567a72781\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.600234 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities\") pod \"99180a72-d38c-44e4-b866-691567a72781\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.600292 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content\") pod \"99180a72-d38c-44e4-b866-691567a72781\" (UID: \"99180a72-d38c-44e4-b866-691567a72781\") " Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.601264 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities" (OuterVolumeSpecName: "utilities") pod "99180a72-d38c-44e4-b866-691567a72781" (UID: "99180a72-d38c-44e4-b866-691567a72781"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.610440 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj" (OuterVolumeSpecName: "kube-api-access-fcclj") pod "99180a72-d38c-44e4-b866-691567a72781" (UID: "99180a72-d38c-44e4-b866-691567a72781"). InnerVolumeSpecName "kube-api-access-fcclj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.704409 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcclj\" (UniqueName: \"kubernetes.io/projected/99180a72-d38c-44e4-b866-691567a72781-kube-api-access-fcclj\") on node \"crc\" DevicePath \"\"" Mar 20 16:24:50 crc kubenswrapper[4730]: I0320 16:24:50.704555 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.036646 4730 generic.go:334] "Generic (PLEG): container finished" podID="99180a72-d38c-44e4-b866-691567a72781" containerID="f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504" exitCode=0 Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.036726 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkg85" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.036709 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerDied","Data":"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504"} Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.036797 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkg85" event={"ID":"99180a72-d38c-44e4-b866-691567a72781","Type":"ContainerDied","Data":"3fa6e12c75e331bc0af19554791977b74f6d4895aba6bdbc02bb29ddc8d7e1db"} Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.036821 4730 scope.go:117] "RemoveContainer" containerID="f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.064518 4730 scope.go:117] "RemoveContainer" containerID="8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.102407 4730 scope.go:117] "RemoveContainer" containerID="b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.160633 4730 scope.go:117] "RemoveContainer" containerID="f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504" Mar 20 16:24:51 crc kubenswrapper[4730]: E0320 16:24:51.161716 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504\": container with ID starting with f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504 not found: ID does not exist" containerID="f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.161754 4730 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504"} err="failed to get container status \"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504\": rpc error: code = NotFound desc = could not find container \"f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504\": container with ID starting with f4829eb2f86a7c76f3956b6b64ce5ba191028ecce966c5159dad68531c1ae504 not found: ID does not exist" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.161781 4730 scope.go:117] "RemoveContainer" containerID="8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5" Mar 20 16:24:51 crc kubenswrapper[4730]: E0320 16:24:51.162849 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5\": container with ID starting with 8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5 not found: ID does not exist" containerID="8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.162877 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5"} err="failed to get container status \"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5\": rpc error: code = NotFound desc = could not find container \"8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5\": container with ID starting with 8a4e7522fb16182e51a6774ba98314d01ce0904c8133a56833e9dfe6f34f0ff5 not found: ID does not exist" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.162896 4730 scope.go:117] "RemoveContainer" containerID="b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455" Mar 20 16:24:51 crc kubenswrapper[4730]: E0320 16:24:51.163418 4730 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455\": container with ID starting with b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455 not found: ID does not exist" containerID="b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.163450 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455"} err="failed to get container status \"b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455\": rpc error: code = NotFound desc = could not find container \"b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455\": container with ID starting with b54e146d59f285cc301fc0bd7589d1ed01f280f8af399226c6b1444fcde6d455 not found: ID does not exist" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.167219 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99180a72-d38c-44e4-b866-691567a72781" (UID: "99180a72-d38c-44e4-b866-691567a72781"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.214704 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99180a72-d38c-44e4-b866-691567a72781-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.370646 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.380111 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gkg85"] Mar 20 16:24:51 crc kubenswrapper[4730]: I0320 16:24:51.543993 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99180a72-d38c-44e4-b866-691567a72781" path="/var/lib/kubelet/pods/99180a72-d38c-44e4-b866-691567a72781/volumes" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.167472 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:24:53 crc kubenswrapper[4730]: E0320 16:24:53.168061 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="extract-utilities" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.168079 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="extract-utilities" Mar 20 16:24:53 crc kubenswrapper[4730]: E0320 16:24:53.168089 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="extract-content" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.168095 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="extract-content" Mar 20 16:24:53 crc kubenswrapper[4730]: E0320 16:24:53.168109 4730 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="registry-server" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.168116 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="registry-server" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.168332 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="99180a72-d38c-44e4-b866-691567a72781" containerName="registry-server" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.169949 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.182310 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.356299 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74b6q\" (UniqueName: \"kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.356645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.356957 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content\") pod 
\"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.458334 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.458412 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.458471 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74b6q\" (UniqueName: \"kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.458913 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.459123 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content\") pod \"community-operators-7jb45\" (UID: 
\"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.491009 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74b6q\" (UniqueName: \"kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q\") pod \"community-operators-7jb45\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.533521 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:24:53 crc kubenswrapper[4730]: E0320 16:24:53.533814 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:24:53 crc kubenswrapper[4730]: I0320 16:24:53.790718 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:24:54 crc kubenswrapper[4730]: I0320 16:24:54.229140 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:24:54 crc kubenswrapper[4730]: W0320 16:24:54.236172 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda235cc35_7c77_483e_ac9b_966218dcd9a8.slice/crio-d42cb277b6d68c8617f1da3c597f77865b9d81575ad6f2fa8dd8530cc4a65717 WatchSource:0}: Error finding container d42cb277b6d68c8617f1da3c597f77865b9d81575ad6f2fa8dd8530cc4a65717: Status 404 returned error can't find the container with id d42cb277b6d68c8617f1da3c597f77865b9d81575ad6f2fa8dd8530cc4a65717 Mar 20 16:24:55 crc kubenswrapper[4730]: I0320 16:24:55.084411 4730 generic.go:334] "Generic (PLEG): container finished" podID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerID="2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08" exitCode=0 Mar 20 16:24:55 crc kubenswrapper[4730]: I0320 16:24:55.084481 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerDied","Data":"2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08"} Mar 20 16:24:55 crc kubenswrapper[4730]: I0320 16:24:55.084807 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerStarted","Data":"d42cb277b6d68c8617f1da3c597f77865b9d81575ad6f2fa8dd8530cc4a65717"} Mar 20 16:24:57 crc kubenswrapper[4730]: I0320 16:24:57.112322 4730 generic.go:334] "Generic (PLEG): container finished" podID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerID="d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47" exitCode=0 Mar 20 16:24:57 crc kubenswrapper[4730]: I0320 
16:24:57.112383 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerDied","Data":"d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47"} Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.122355 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerStarted","Data":"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df"} Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.145757 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jb45" podStartSLOduration=2.592540357 podStartE2EDuration="5.145740246s" podCreationTimestamp="2026-03-20 16:24:53 +0000 UTC" firstStartedPulling="2026-03-20 16:24:55.086373418 +0000 UTC m=+2754.299744807" lastFinishedPulling="2026-03-20 16:24:57.639573297 +0000 UTC m=+2756.852944696" observedRunningTime="2026-03-20 16:24:58.140972561 +0000 UTC m=+2757.354343920" watchObservedRunningTime="2026-03-20 16:24:58.145740246 +0000 UTC m=+2757.359111615" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.459377 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.461759 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.475884 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.557606 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qs8s\" (UniqueName: \"kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.557710 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.557734 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.659931 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qs8s\" (UniqueName: \"kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.660028 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.660048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.660578 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.660983 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.695300 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qs8s\" (UniqueName: \"kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s\") pod \"redhat-marketplace-95hgg\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:58 crc kubenswrapper[4730]: I0320 16:24:58.777998 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:24:59 crc kubenswrapper[4730]: I0320 16:24:59.263664 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:25:00 crc kubenswrapper[4730]: I0320 16:25:00.152093 4730 generic.go:334] "Generic (PLEG): container finished" podID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerID="21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db" exitCode=0 Mar 20 16:25:00 crc kubenswrapper[4730]: I0320 16:25:00.152146 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerDied","Data":"21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db"} Mar 20 16:25:00 crc kubenswrapper[4730]: I0320 16:25:00.152781 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerStarted","Data":"52a0abdeee686f55ac11d96180a5e7f5807128b11f0529b546142dd1ba88cd8b"} Mar 20 16:25:01 crc kubenswrapper[4730]: I0320 16:25:01.164900 4730 generic.go:334] "Generic (PLEG): container finished" podID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerID="aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208" exitCode=0 Mar 20 16:25:01 crc kubenswrapper[4730]: I0320 16:25:01.165016 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerDied","Data":"aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208"} Mar 20 16:25:03 crc kubenswrapper[4730]: I0320 16:25:03.186136 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" 
event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerStarted","Data":"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c"} Mar 20 16:25:03 crc kubenswrapper[4730]: I0320 16:25:03.214003 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-95hgg" podStartSLOduration=3.043675095 podStartE2EDuration="5.213981681s" podCreationTimestamp="2026-03-20 16:24:58 +0000 UTC" firstStartedPulling="2026-03-20 16:25:00.154330904 +0000 UTC m=+2759.367702273" lastFinishedPulling="2026-03-20 16:25:02.32463749 +0000 UTC m=+2761.538008859" observedRunningTime="2026-03-20 16:25:03.204239224 +0000 UTC m=+2762.417610583" watchObservedRunningTime="2026-03-20 16:25:03.213981681 +0000 UTC m=+2762.427353050" Mar 20 16:25:03 crc kubenswrapper[4730]: I0320 16:25:03.791471 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:03 crc kubenswrapper[4730]: I0320 16:25:03.791523 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:03 crc kubenswrapper[4730]: I0320 16:25:03.857533 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:04 crc kubenswrapper[4730]: I0320 16:25:04.248595 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:05 crc kubenswrapper[4730]: I0320 16:25:05.044499 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:25:06 crc kubenswrapper[4730]: I0320 16:25:06.210645 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jb45" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="registry-server" 
containerID="cri-o://8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df" gracePeriod=2 Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.155129 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.184949 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74b6q\" (UniqueName: \"kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q\") pod \"a235cc35-7c77-483e-ac9b-966218dcd9a8\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.185000 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content\") pod \"a235cc35-7c77-483e-ac9b-966218dcd9a8\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.185060 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities\") pod \"a235cc35-7c77-483e-ac9b-966218dcd9a8\" (UID: \"a235cc35-7c77-483e-ac9b-966218dcd9a8\") " Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.186400 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities" (OuterVolumeSpecName: "utilities") pod "a235cc35-7c77-483e-ac9b-966218dcd9a8" (UID: "a235cc35-7c77-483e-ac9b-966218dcd9a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.192663 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q" (OuterVolumeSpecName: "kube-api-access-74b6q") pod "a235cc35-7c77-483e-ac9b-966218dcd9a8" (UID: "a235cc35-7c77-483e-ac9b-966218dcd9a8"). InnerVolumeSpecName "kube-api-access-74b6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.253724 4730 generic.go:334] "Generic (PLEG): container finished" podID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerID="8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df" exitCode=0 Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.253782 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerDied","Data":"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df"} Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.253814 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb45" event={"ID":"a235cc35-7c77-483e-ac9b-966218dcd9a8","Type":"ContainerDied","Data":"d42cb277b6d68c8617f1da3c597f77865b9d81575ad6f2fa8dd8530cc4a65717"} Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.253835 4730 scope.go:117] "RemoveContainer" containerID="8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.254034 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jb45" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.263368 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a235cc35-7c77-483e-ac9b-966218dcd9a8" (UID: "a235cc35-7c77-483e-ac9b-966218dcd9a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.285112 4730 scope.go:117] "RemoveContainer" containerID="d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.286747 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74b6q\" (UniqueName: \"kubernetes.io/projected/a235cc35-7c77-483e-ac9b-966218dcd9a8-kube-api-access-74b6q\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.286767 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.286776 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a235cc35-7c77-483e-ac9b-966218dcd9a8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.311665 4730 scope.go:117] "RemoveContainer" containerID="2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.353972 4730 scope.go:117] "RemoveContainer" containerID="8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df" Mar 20 16:25:07 crc kubenswrapper[4730]: E0320 16:25:07.354450 4730 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df\": container with ID starting with 8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df not found: ID does not exist" containerID="8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.354575 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df"} err="failed to get container status \"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df\": rpc error: code = NotFound desc = could not find container \"8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df\": container with ID starting with 8a9592624bd98932298ea29c114b5fa0638addc2801ff6eb39202018ee77f0df not found: ID does not exist" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.354662 4730 scope.go:117] "RemoveContainer" containerID="d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47" Mar 20 16:25:07 crc kubenswrapper[4730]: E0320 16:25:07.355118 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47\": container with ID starting with d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47 not found: ID does not exist" containerID="d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.355223 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47"} err="failed to get container status \"d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47\": rpc error: code = NotFound desc = could not find container 
\"d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47\": container with ID starting with d99d57bb87e0f3a1b3b5043e9291a7c3735d1bf0b933793d1e2984f582770d47 not found: ID does not exist" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.355314 4730 scope.go:117] "RemoveContainer" containerID="2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08" Mar 20 16:25:07 crc kubenswrapper[4730]: E0320 16:25:07.355794 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08\": container with ID starting with 2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08 not found: ID does not exist" containerID="2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.355849 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08"} err="failed to get container status \"2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08\": rpc error: code = NotFound desc = could not find container \"2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08\": container with ID starting with 2fc031312adb328146003fdca6895b2fc848dfb296665101401a3ee75c2bee08 not found: ID does not exist" Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.582365 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:25:07 crc kubenswrapper[4730]: I0320 16:25:07.591231 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jb45"] Mar 20 16:25:08 crc kubenswrapper[4730]: I0320 16:25:08.533748 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:25:08 crc kubenswrapper[4730]: 
E0320 16:25:08.534209 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:25:08 crc kubenswrapper[4730]: I0320 16:25:08.778460 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:08 crc kubenswrapper[4730]: I0320 16:25:08.778537 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:08 crc kubenswrapper[4730]: I0320 16:25:08.832823 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:09 crc kubenswrapper[4730]: I0320 16:25:09.319610 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:09 crc kubenswrapper[4730]: I0320 16:25:09.545279 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" path="/var/lib/kubelet/pods/a235cc35-7c77-483e-ac9b-966218dcd9a8/volumes" Mar 20 16:25:11 crc kubenswrapper[4730]: I0320 16:25:11.444357 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:25:11 crc kubenswrapper[4730]: I0320 16:25:11.444576 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-95hgg" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="registry-server" containerID="cri-o://da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c" 
gracePeriod=2 Mar 20 16:25:11 crc kubenswrapper[4730]: I0320 16:25:11.919782 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.099411 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qs8s\" (UniqueName: \"kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s\") pod \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.099479 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities\") pod \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.099681 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content\") pod \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\" (UID: \"e69761d7-fc5c-47a9-8f69-3f89dd557b65\") " Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.100484 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities" (OuterVolumeSpecName: "utilities") pod "e69761d7-fc5c-47a9-8f69-3f89dd557b65" (UID: "e69761d7-fc5c-47a9-8f69-3f89dd557b65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.108435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s" (OuterVolumeSpecName: "kube-api-access-2qs8s") pod "e69761d7-fc5c-47a9-8f69-3f89dd557b65" (UID: "e69761d7-fc5c-47a9-8f69-3f89dd557b65"). InnerVolumeSpecName "kube-api-access-2qs8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.126356 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e69761d7-fc5c-47a9-8f69-3f89dd557b65" (UID: "e69761d7-fc5c-47a9-8f69-3f89dd557b65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.201910 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.201945 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qs8s\" (UniqueName: \"kubernetes.io/projected/e69761d7-fc5c-47a9-8f69-3f89dd557b65-kube-api-access-2qs8s\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.201959 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e69761d7-fc5c-47a9-8f69-3f89dd557b65-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.303303 4730 generic.go:334] "Generic (PLEG): container finished" podID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" 
containerID="da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c" exitCode=0 Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.303346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerDied","Data":"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c"} Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.303372 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95hgg" event={"ID":"e69761d7-fc5c-47a9-8f69-3f89dd557b65","Type":"ContainerDied","Data":"52a0abdeee686f55ac11d96180a5e7f5807128b11f0529b546142dd1ba88cd8b"} Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.303375 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95hgg" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.303400 4730 scope.go:117] "RemoveContainer" containerID="da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.336589 4730 scope.go:117] "RemoveContainer" containerID="aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.342951 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.351430 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-95hgg"] Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.358310 4730 scope.go:117] "RemoveContainer" containerID="21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.403116 4730 scope.go:117] "RemoveContainer" containerID="da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c" Mar 20 
16:25:12 crc kubenswrapper[4730]: E0320 16:25:12.403662 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c\": container with ID starting with da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c not found: ID does not exist" containerID="da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.403703 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c"} err="failed to get container status \"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c\": rpc error: code = NotFound desc = could not find container \"da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c\": container with ID starting with da51b54a8c5fcbb4fa2046cfb5cf5456b08f534d75b4795aa1c5d672bb8dcf6c not found: ID does not exist" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.403729 4730 scope.go:117] "RemoveContainer" containerID="aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208" Mar 20 16:25:12 crc kubenswrapper[4730]: E0320 16:25:12.404117 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208\": container with ID starting with aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208 not found: ID does not exist" containerID="aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.404134 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208"} err="failed to get container status 
\"aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208\": rpc error: code = NotFound desc = could not find container \"aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208\": container with ID starting with aa8b3aec20a3b9706c932d80e020850878604b63ed5b242bfe7cae92232b5208 not found: ID does not exist" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.404145 4730 scope.go:117] "RemoveContainer" containerID="21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db" Mar 20 16:25:12 crc kubenswrapper[4730]: E0320 16:25:12.404423 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db\": container with ID starting with 21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db not found: ID does not exist" containerID="21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db" Mar 20 16:25:12 crc kubenswrapper[4730]: I0320 16:25:12.404463 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db"} err="failed to get container status \"21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db\": rpc error: code = NotFound desc = could not find container \"21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db\": container with ID starting with 21206359c5fd0201b8b16a153309c18774b9c8d5deb6759f44c3804d3b10a4db not found: ID does not exist" Mar 20 16:25:13 crc kubenswrapper[4730]: I0320 16:25:13.542954 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" path="/var/lib/kubelet/pods/e69761d7-fc5c-47a9-8f69-3f89dd557b65/volumes" Mar 20 16:25:20 crc kubenswrapper[4730]: I0320 16:25:20.533419 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 
16:25:20 crc kubenswrapper[4730]: E0320 16:25:20.534334 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:25:33 crc kubenswrapper[4730]: I0320 16:25:33.533037 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:25:33 crc kubenswrapper[4730]: E0320 16:25:33.533740 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:25:34 crc kubenswrapper[4730]: I0320 16:25:34.504540 4730 generic.go:334] "Generic (PLEG): container finished" podID="884c2fa6-babb-44b8-b8e2-3e4fbce27153" containerID="4f23a2d6263e179fbb42b831d0fa59c0e1411d46b87becc3c39bd7883db89197" exitCode=0 Mar 20 16:25:34 crc kubenswrapper[4730]: I0320 16:25:34.504627 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" event={"ID":"884c2fa6-babb-44b8-b8e2-3e4fbce27153","Type":"ContainerDied","Data":"4f23a2d6263e179fbb42b831d0fa59c0e1411d46b87becc3c39bd7883db89197"} Mar 20 16:25:35 crc kubenswrapper[4730]: I0320 16:25:35.977942 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052105 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052172 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052222 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052284 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc 
kubenswrapper[4730]: I0320 16:25:36.052324 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.052366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vccl6\" (UniqueName: \"kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6\") pod \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\" (UID: \"884c2fa6-babb-44b8-b8e2-3e4fbce27153\") " Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.071304 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.071633 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6" (OuterVolumeSpecName: "kube-api-access-vccl6") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "kube-api-access-vccl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.082423 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.082487 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.083725 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.084187 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory" (OuterVolumeSpecName: "inventory") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.093483 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "884c2fa6-babb-44b8-b8e2-3e4fbce27153" (UID: "884c2fa6-babb-44b8-b8e2-3e4fbce27153"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.154339 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.154741 4730 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-inventory\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.154816 4730 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.154909 4730 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.154981 4730 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc 
kubenswrapper[4730]: I0320 16:25:36.155049 4730 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/884c2fa6-babb-44b8-b8e2-3e4fbce27153-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.155115 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vccl6\" (UniqueName: \"kubernetes.io/projected/884c2fa6-babb-44b8-b8e2-3e4fbce27153-kube-api-access-vccl6\") on node \"crc\" DevicePath \"\"" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.553821 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" event={"ID":"884c2fa6-babb-44b8-b8e2-3e4fbce27153","Type":"ContainerDied","Data":"7ab2683493c806bbc0a72a38ad816fd90029738a822e7670b5034e10e4d79b42"} Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.553871 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab2683493c806bbc0a72a38ad816fd90029738a822e7670b5034e10e4d79b42" Mar 20 16:25:36 crc kubenswrapper[4730]: I0320 16:25:36.553925 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq" Mar 20 16:25:48 crc kubenswrapper[4730]: I0320 16:25:48.532732 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:25:49 crc kubenswrapper[4730]: I0320 16:25:49.664097 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f"} Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.141678 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567066-7w6j9"] Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.143781 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.143878 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.143959 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884c2fa6-babb-44b8-b8e2-3e4fbce27153" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144027 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="884c2fa6-babb-44b8-b8e2-3e4fbce27153" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.144136 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144207 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" 
containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.144315 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="extract-utilities" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144387 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="extract-utilities" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.144460 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="extract-utilities" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144524 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="extract-utilities" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.144600 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="extract-content" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144666 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="extract-content" Mar 20 16:26:00 crc kubenswrapper[4730]: E0320 16:26:00.144738 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="extract-content" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.144805 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="extract-content" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.145118 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="884c2fa6-babb-44b8-b8e2-3e4fbce27153" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.145217 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e69761d7-fc5c-47a9-8f69-3f89dd557b65" containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.145312 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a235cc35-7c77-483e-ac9b-966218dcd9a8" containerName="registry-server" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.146209 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.148638 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.148930 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.149044 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.149640 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567066-7w6j9"] Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.225815 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ghs\" (UniqueName: \"kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs\") pod \"auto-csr-approver-29567066-7w6j9\" (UID: \"2996ff7e-454f-40d8-bc5c-894c45b7a58c\") " pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.327849 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ghs\" (UniqueName: \"kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs\") pod \"auto-csr-approver-29567066-7w6j9\" (UID: \"2996ff7e-454f-40d8-bc5c-894c45b7a58c\") " 
pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.348329 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ghs\" (UniqueName: \"kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs\") pod \"auto-csr-approver-29567066-7w6j9\" (UID: \"2996ff7e-454f-40d8-bc5c-894c45b7a58c\") " pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.464808 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:00 crc kubenswrapper[4730]: I0320 16:26:00.925043 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567066-7w6j9"] Mar 20 16:26:01 crc kubenswrapper[4730]: I0320 16:26:01.781138 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" event={"ID":"2996ff7e-454f-40d8-bc5c-894c45b7a58c","Type":"ContainerStarted","Data":"0b348ada8776d11aa72321a4e5c9a7e6f704ae7f87da09179dd7eb2ae8dcbedb"} Mar 20 16:26:02 crc kubenswrapper[4730]: I0320 16:26:02.790747 4730 generic.go:334] "Generic (PLEG): container finished" podID="2996ff7e-454f-40d8-bc5c-894c45b7a58c" containerID="1bc53c769c23ddf21b26ded07f20d42f23d89029cfc127b5683d217f18b840d6" exitCode=0 Mar 20 16:26:02 crc kubenswrapper[4730]: I0320 16:26:02.790924 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" event={"ID":"2996ff7e-454f-40d8-bc5c-894c45b7a58c","Type":"ContainerDied","Data":"1bc53c769c23ddf21b26ded07f20d42f23d89029cfc127b5683d217f18b840d6"} Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.327505 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.411094 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ghs\" (UniqueName: \"kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs\") pod \"2996ff7e-454f-40d8-bc5c-894c45b7a58c\" (UID: \"2996ff7e-454f-40d8-bc5c-894c45b7a58c\") " Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.415779 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs" (OuterVolumeSpecName: "kube-api-access-b5ghs") pod "2996ff7e-454f-40d8-bc5c-894c45b7a58c" (UID: "2996ff7e-454f-40d8-bc5c-894c45b7a58c"). InnerVolumeSpecName "kube-api-access-b5ghs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.513689 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5ghs\" (UniqueName: \"kubernetes.io/projected/2996ff7e-454f-40d8-bc5c-894c45b7a58c-kube-api-access-b5ghs\") on node \"crc\" DevicePath \"\"" Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.837858 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" event={"ID":"2996ff7e-454f-40d8-bc5c-894c45b7a58c","Type":"ContainerDied","Data":"0b348ada8776d11aa72321a4e5c9a7e6f704ae7f87da09179dd7eb2ae8dcbedb"} Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.837897 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b348ada8776d11aa72321a4e5c9a7e6f704ae7f87da09179dd7eb2ae8dcbedb" Mar 20 16:26:04 crc kubenswrapper[4730]: I0320 16:26:04.837909 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567066-7w6j9" Mar 20 16:26:05 crc kubenswrapper[4730]: I0320 16:26:05.420391 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567060-dbzpk"] Mar 20 16:26:05 crc kubenswrapper[4730]: I0320 16:26:05.442270 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567060-dbzpk"] Mar 20 16:26:05 crc kubenswrapper[4730]: I0320 16:26:05.545380 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b03afb-153e-4f6b-a88a-e64d8a889b97" path="/var/lib/kubelet/pods/95b03afb-153e-4f6b-a88a-e64d8a889b97/volumes" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.050988 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: E0320 16:26:11.053145 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ff7e-454f-40d8-bc5c-894c45b7a58c" containerName="oc" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.053180 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2996ff7e-454f-40d8-bc5c-894c45b7a58c" containerName="oc" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.053421 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2996ff7e-454f-40d8-bc5c-894c45b7a58c" containerName="oc" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.054442 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.061082 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.091690 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.104881 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.107155 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.109092 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133399 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-scripts\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133454 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133507 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " 
pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133538 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-lib-modules\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-dev\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133624 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133646 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133665 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133707 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133735 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-run\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133777 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-sys\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133801 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.133822 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h764b\" (UniqueName: \"kubernetes.io/projected/0bde1710-3861-42cb-8647-292785ee4392-kube-api-access-h764b\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.140108 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.215562 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.217178 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.223625 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.228568 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235100 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-dev\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235155 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5n7\" (UniqueName: \"kubernetes.io/projected/436a7a40-7823-4670-a107-ff5ca02da822-kube-api-access-vz5n7\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235178 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235199 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235217 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235235 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235270 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235307 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235328 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235353 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235388 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-run\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235409 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235445 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-dev\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235467 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-sys\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc 
kubenswrapper[4730]: I0320 16:26:11.235491 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235506 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-sys\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h764b\" (UniqueName: \"kubernetes.io/projected/0bde1710-3861-42cb-8647-292785ee4392-kube-api-access-h764b\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235542 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-run\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235558 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235578 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-scripts\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235595 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235612 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235658 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235674 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235694 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-brick\") pod 
\"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235711 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235733 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235757 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235773 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-lib-modules\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.235853 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-lib-modules\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: 
I0320 16:26:11.235887 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-dev\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.238798 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.241859 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.242155 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.242338 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.242382 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-run\") pod \"cinder-backup-0\" (UID: 
\"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.242417 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.242471 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-sys\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.246339 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0bde1710-3861-42cb-8647-292785ee4392-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.247081 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.247317 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.261402 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-scripts\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.268237 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bde1710-3861-42cb-8647-292785ee4392-config-data\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.278017 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h764b\" (UniqueName: \"kubernetes.io/projected/0bde1710-3861-42cb-8647-292785ee4392-kube-api-access-h764b\") pod \"cinder-backup-0\" (UID: \"0bde1710-3861-42cb-8647-292785ee4392\") " pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338164 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338230 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338285 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: 
\"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338313 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-dev\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338363 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b6z\" (UniqueName: \"kubernetes.io/projected/20675030-52b7-4f1d-b087-d7703a59f5e1-kube-api-access-t4b6z\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338387 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-sys\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338420 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-run\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338474 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338506 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338535 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338580 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338613 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338637 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338669 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338697 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338728 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338756 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338783 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-machine-id\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338815 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338840 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338861 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338900 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338922 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5n7\" (UniqueName: \"kubernetes.io/projected/436a7a40-7823-4670-a107-ff5ca02da822-kube-api-access-vz5n7\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " 
pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338951 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.338973 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.339022 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.339054 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.339082 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.339135 4730 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342410 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342509 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342544 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-dev\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342580 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-sys\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342606 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-run\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 
16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342652 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.342694 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.343490 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.343572 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.343825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.346987 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.347416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/436a7a40-7823-4670-a107-ff5ca02da822-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.354273 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.355035 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436a7a40-7823-4670-a107-ff5ca02da822-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.361857 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5n7\" (UniqueName: \"kubernetes.io/projected/436a7a40-7823-4670-a107-ff5ca02da822-kube-api-access-vz5n7\") pod \"cinder-volume-nfs-0\" (UID: \"436a7a40-7823-4670-a107-ff5ca02da822\") " pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.372057 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.423854 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440759 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440816 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440871 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440888 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440923 4730 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440980 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.440930 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441022 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441067 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441266 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441310 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441403 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441440 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441513 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b6z\" (UniqueName: \"kubernetes.io/projected/20675030-52b7-4f1d-b087-d7703a59f5e1-kube-api-access-t4b6z\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441589 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441629 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441643 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441679 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441702 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441701 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-sys\") 
pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.441974 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.442012 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20675030-52b7-4f1d-b087-d7703a59f5e1-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.445515 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.446550 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.450210 
4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.451333 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20675030-52b7-4f1d-b087-d7703a59f5e1-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.461002 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b6z\" (UniqueName: \"kubernetes.io/projected/20675030-52b7-4f1d-b087-d7703a59f5e1-kube-api-access-t4b6z\") pod \"cinder-volume-nfs-2-0\" (UID: \"20675030-52b7-4f1d-b087-d7703a59f5e1\") " pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:11 crc kubenswrapper[4730]: I0320 16:26:11.548893 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.075732 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.199495 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.317983 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.924304 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0bde1710-3861-42cb-8647-292785ee4392","Type":"ContainerStarted","Data":"685c1d13a0db4e2ea0fc6d07cdce729cef52e4b3b6f558f92d6a241eb761edaf"} Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.925607 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"20675030-52b7-4f1d-b087-d7703a59f5e1","Type":"ContainerStarted","Data":"230f866f4bbf8aaf3b8b2e46381fba766c863e7ffc2853c343419bf0a04c7356"} Mar 20 16:26:12 crc kubenswrapper[4730]: I0320 16:26:12.928528 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"436a7a40-7823-4670-a107-ff5ca02da822","Type":"ContainerStarted","Data":"515e994ed7ed84d55410946a62a82441d3159b32e8e0ba716c111cf102becdce"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.958687 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"20675030-52b7-4f1d-b087-d7703a59f5e1","Type":"ContainerStarted","Data":"244a1a9da01b80a2dec1f2aad86cbb80808c91e9b5141f3019c87685f9ee1f67"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.959391 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" 
event={"ID":"20675030-52b7-4f1d-b087-d7703a59f5e1","Type":"ContainerStarted","Data":"98c4cc15882d4d9c4f39cab3aeaf386a68e358cfed7432474997137d22f43381"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.979094 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"436a7a40-7823-4670-a107-ff5ca02da822","Type":"ContainerStarted","Data":"8d7b2fd6a6a3f93db5fcfb585196006bb7bd8578570a1903fcc7ac89695494c6"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.979177 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"436a7a40-7823-4670-a107-ff5ca02da822","Type":"ContainerStarted","Data":"e94a36ea4cfee2e3e5488295127b81b7b45cd1eb7a47adee15169f50919eebf7"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.987501 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0bde1710-3861-42cb-8647-292785ee4392","Type":"ContainerStarted","Data":"055fac303ea9ea6a10e34f284632763ac00812becd9a3d2658a8c31f1e8ac202"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.987558 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0bde1710-3861-42cb-8647-292785ee4392","Type":"ContainerStarted","Data":"d813994f52ffcff4e1cd06d55a94248aab665c57861602d1352bd70e1511de1d"} Mar 20 16:26:13 crc kubenswrapper[4730]: I0320 16:26:13.999173 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.520349264 podStartE2EDuration="2.999146832s" podCreationTimestamp="2026-03-20 16:26:11 +0000 UTC" firstStartedPulling="2026-03-20 16:26:12.325493126 +0000 UTC m=+2831.538864505" lastFinishedPulling="2026-03-20 16:26:12.804290704 +0000 UTC m=+2832.017662073" observedRunningTime="2026-03-20 16:26:13.98958775 +0000 UTC m=+2833.202959119" watchObservedRunningTime="2026-03-20 16:26:13.999146832 +0000 UTC m=+2833.212518201" Mar 20 
16:26:14 crc kubenswrapper[4730]: I0320 16:26:14.027706 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.471252346 podStartE2EDuration="3.027690204s" podCreationTimestamp="2026-03-20 16:26:11 +0000 UTC" firstStartedPulling="2026-03-20 16:26:12.247082504 +0000 UTC m=+2831.460453873" lastFinishedPulling="2026-03-20 16:26:12.803520362 +0000 UTC m=+2832.016891731" observedRunningTime="2026-03-20 16:26:14.024895655 +0000 UTC m=+2833.238267034" watchObservedRunningTime="2026-03-20 16:26:14.027690204 +0000 UTC m=+2833.241061573" Mar 20 16:26:14 crc kubenswrapper[4730]: I0320 16:26:14.064163 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.420906774 podStartE2EDuration="3.064141902s" podCreationTimestamp="2026-03-20 16:26:11 +0000 UTC" firstStartedPulling="2026-03-20 16:26:12.081727638 +0000 UTC m=+2831.295099007" lastFinishedPulling="2026-03-20 16:26:12.724962746 +0000 UTC m=+2831.938334135" observedRunningTime="2026-03-20 16:26:14.051378758 +0000 UTC m=+2833.264750127" watchObservedRunningTime="2026-03-20 16:26:14.064141902 +0000 UTC m=+2833.277513271" Mar 20 16:26:16 crc kubenswrapper[4730]: I0320 16:26:16.373206 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Mar 20 16:26:16 crc kubenswrapper[4730]: I0320 16:26:16.425268 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:16 crc kubenswrapper[4730]: I0320 16:26:16.550191 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:21 crc kubenswrapper[4730]: I0320 16:26:21.570375 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Mar 20 16:26:21 crc kubenswrapper[4730]: I0320 16:26:21.704261 4730 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Mar 20 16:26:21 crc kubenswrapper[4730]: I0320 16:26:21.865192 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Mar 20 16:26:49 crc kubenswrapper[4730]: I0320 16:26:49.849962 4730 scope.go:117] "RemoveContainer" containerID="fea91f3e29a9829ef950c2b8d1b25f02c8cfb65d5e94d7557e912ffa33559bad" Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.455916 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.457075 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="prometheus" containerID="cri-o://8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce" gracePeriod=600 Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.457143 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="thanos-sidecar" containerID="cri-o://bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7" gracePeriod=600 Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.457150 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="config-reloader" containerID="cri-o://df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec" gracePeriod=600 Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.627051 4730 generic.go:334] "Generic (PLEG): container finished" podID="b9474555-d03c-4f34-8914-15b7654ec76e" containerID="bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7" exitCode=0 Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 
16:27:21.627081 4730 generic.go:334] "Generic (PLEG): container finished" podID="b9474555-d03c-4f34-8914-15b7654ec76e" containerID="8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce" exitCode=0 Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.627100 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerDied","Data":"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7"} Mar 20 16:27:21 crc kubenswrapper[4730]: I0320 16:27:21.627123 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerDied","Data":"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce"} Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.577098 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.660455 4730 generic.go:334] "Generic (PLEG): container finished" podID="b9474555-d03c-4f34-8914-15b7654ec76e" containerID="df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec" exitCode=0 Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.660506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerDied","Data":"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec"} Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.660538 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b9474555-d03c-4f34-8914-15b7654ec76e","Type":"ContainerDied","Data":"8f7081ac79f5f8ab5d11083740ef5a60bf4e5c0ec09313ba816c9290b7a2077b"} Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.660557 4730 scope.go:117] "RemoveContainer" 
containerID="bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.660722 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.681168 4730 scope.go:117] "RemoveContainer" containerID="df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.700064 4730 scope.go:117] "RemoveContainer" containerID="8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.728770 4730 scope.go:117] "RemoveContainer" containerID="c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.731823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.731881 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.731971 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" 
(UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732007 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732039 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732085 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732226 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732342 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732367 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732470 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732501 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55vpw\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732600 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.732645 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0\") pod \"b9474555-d03c-4f34-8914-15b7654ec76e\" (UID: \"b9474555-d03c-4f34-8914-15b7654ec76e\") " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.733432 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.734375 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.734664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.739130 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.739546 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw" (OuterVolumeSpecName: "kube-api-access-55vpw") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "kube-api-access-55vpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.739693 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config" (OuterVolumeSpecName: "config") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.739720 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.739992 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.745525 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.745695 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.752169 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out" (OuterVolumeSpecName: "config-out") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.758012 4730 scope.go:117] "RemoveContainer" containerID="bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7" Mar 20 16:27:22 crc kubenswrapper[4730]: E0320 16:27:22.758813 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7\": container with ID starting with bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7 not found: ID does not exist" containerID="bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.758850 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7"} err="failed to get container status \"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7\": rpc error: code = NotFound desc = could not find container \"bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7\": container with ID starting with bf089e92ae421a8920cefe87896cf3bb8f1ad22d0fc9bc224fb423d5346400e7 not found: ID does not exist" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.758875 4730 scope.go:117] "RemoveContainer" containerID="df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec" Mar 20 16:27:22 crc kubenswrapper[4730]: E0320 16:27:22.759258 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec\": container with ID starting with df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec not found: ID does not exist" containerID="df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.759278 
4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec"} err="failed to get container status \"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec\": rpc error: code = NotFound desc = could not find container \"df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec\": container with ID starting with df8727da1c49b4db84c764524b2f4d737c0c03808dcebb6fb23caea5272a7aec not found: ID does not exist" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.759292 4730 scope.go:117] "RemoveContainer" containerID="8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce" Mar 20 16:27:22 crc kubenswrapper[4730]: E0320 16:27:22.759525 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce\": container with ID starting with 8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce not found: ID does not exist" containerID="8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.759579 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce"} err="failed to get container status \"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce\": rpc error: code = NotFound desc = could not find container \"8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce\": container with ID starting with 8be7e9c76c5955e48d7fb228ba7f64fc7c79c7a945b0f730c1eb3ac871d2f2ce not found: ID does not exist" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.759599 4730 scope.go:117] "RemoveContainer" containerID="c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85" Mar 20 16:27:22 crc kubenswrapper[4730]: E0320 
16:27:22.759816 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85\": container with ID starting with c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85 not found: ID does not exist" containerID="c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.759831 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85"} err="failed to get container status \"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85\": rpc error: code = NotFound desc = could not find container \"c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85\": container with ID starting with c6f36cf8613ae0c9fc8870f685a85cb84e12a16bcd52f187f659c8895c86bf85 not found: ID does not exist" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.761618 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.826156 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config" (OuterVolumeSpecName: "web-config") pod "b9474555-d03c-4f34-8914-15b7654ec76e" (UID: "b9474555-d03c-4f34-8914-15b7654ec76e"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835795 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vpw\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-kube-api-access-55vpw\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835855 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") on node \"crc\" " Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835871 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835887 4730 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835899 4730 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835913 4730 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9474555-d03c-4f34-8914-15b7654ec76e-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: 
I0320 16:27:22.835924 4730 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835935 4730 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835946 4730 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835957 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835967 4730 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9474555-d03c-4f34-8914-15b7654ec76e-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835976 4730 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9474555-d03c-4f34-8914-15b7654ec76e-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.835987 4730 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b9474555-d03c-4f34-8914-15b7654ec76e-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 
16:27:22.875144 4730 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.875300 4730 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d") on node "crc" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.937397 4730 reconciler_common.go:293] "Volume detached for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") on node \"crc\" DevicePath \"\"" Mar 20 16:27:22 crc kubenswrapper[4730]: I0320 16:27:22.998466 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.015216 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.040413 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 16:27:23 crc kubenswrapper[4730]: E0320 16:27:23.040875 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="init-config-reloader" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.040902 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="init-config-reloader" Mar 20 16:27:23 crc kubenswrapper[4730]: E0320 16:27:23.040936 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="config-reloader" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.040945 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="config-reloader" Mar 20 16:27:23 crc kubenswrapper[4730]: E0320 16:27:23.040975 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="prometheus" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.040983 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="prometheus" Mar 20 16:27:23 crc kubenswrapper[4730]: E0320 16:27:23.041000 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="thanos-sidecar" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.041008 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="thanos-sidecar" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.041267 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="thanos-sidecar" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.041287 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="prometheus" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.041299 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" containerName="config-reloader" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.043613 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.045672 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.045805 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.046145 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.046323 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.046436 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.046939 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.047734 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2q5k6" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.054332 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.061854 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141091 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141141 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141175 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141196 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a736d-6c3f-486d-b8d1-fef19df33e26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141233 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141492 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141559 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141766 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141845 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141905 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" 
(UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141947 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141969 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pth\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-kube-api-access-s2pth\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.141995 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.243876 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pth\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-kube-api-access-s2pth\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.243928 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.243976 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-config\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.243997 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244041 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a736d-6c3f-486d-b8d1-fef19df33e26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244056 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0" Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244095 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244138 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244157 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244260 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244297 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.244326 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.245283 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.245833 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.246097 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/457a736d-6c3f-486d-b8d1-fef19df33e26-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.248798 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.249442 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-config\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.249816 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/457a736d-6c3f-486d-b8d1-fef19df33e26-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.251104 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.251508 4730 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.251589 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c50c5c57c27fdb24da1fcbf3a7504c7bda45f4dc15a5678e0deb708aa433733/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.252485 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.255992 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.257002 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.264235 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pth\" (UniqueName: \"kubernetes.io/projected/457a736d-6c3f-486d-b8d1-fef19df33e26-kube-api-access-s2pth\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.264397 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/457a736d-6c3f-486d-b8d1-fef19df33e26-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.297362 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-361883a8-e26d-4e93-ae2e-72ea2afef68d\") pod \"prometheus-metric-storage-0\" (UID: \"457a736d-6c3f-486d-b8d1-fef19df33e26\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.372783 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.547283 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9474555-d03c-4f34-8914-15b7654ec76e" path="/var/lib/kubelet/pods/b9474555-d03c-4f34-8914-15b7654ec76e/volumes"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.805210 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"]
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.808461 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.817768 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"]
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.897168 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.963442 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv8w\" (UniqueName: \"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.963515 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:23 crc kubenswrapper[4730]: I0320 16:27:23.963584 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.065987 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv8w\" (UniqueName: \"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.066384 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.066458 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.066981 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.067040 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.090842 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kv8w\" (UniqueName: \"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w\") pod \"redhat-operators-tbwsz\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.160001 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.673129 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"]
Mar 20 16:27:24 crc kubenswrapper[4730]: I0320 16:27:24.685310 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerStarted","Data":"a08b055a42b0787d64934a875d37cdeff6d784dccb9876ce7c9cf3a5cf1d37c4"}
Mar 20 16:27:24 crc kubenswrapper[4730]: W0320 16:27:24.687427 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4b708fa_9cfd_4986_b8cf_829083a898dc.slice/crio-bd691c4e3989104a40a71572866dcc51a30677ea52b34eea58c7ab44f894bd33 WatchSource:0}: Error finding container bd691c4e3989104a40a71572866dcc51a30677ea52b34eea58c7ab44f894bd33: Status 404 returned error can't find the container with id bd691c4e3989104a40a71572866dcc51a30677ea52b34eea58c7ab44f894bd33
Mar 20 16:27:25 crc kubenswrapper[4730]: I0320 16:27:25.693706 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerID="3f7a8e2bcf60a0b2f78a1591703724fb4e894f9989953a89c4c5bdecd0d31e6b" exitCode=0
Mar 20 16:27:25 crc kubenswrapper[4730]: I0320 16:27:25.693804 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerDied","Data":"3f7a8e2bcf60a0b2f78a1591703724fb4e894f9989953a89c4c5bdecd0d31e6b"}
Mar 20 16:27:25 crc kubenswrapper[4730]: I0320 16:27:25.694363 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerStarted","Data":"bd691c4e3989104a40a71572866dcc51a30677ea52b34eea58c7ab44f894bd33"}
Mar 20 16:27:25 crc kubenswrapper[4730]: I0320 16:27:25.696417 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 16:27:27 crc kubenswrapper[4730]: I0320 16:27:27.714701 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerStarted","Data":"8484dd0c7c8c11489d2bf88865470823c2eca98ee8d5c33a4ca245b45d333516"}
Mar 20 16:27:28 crc kubenswrapper[4730]: I0320 16:27:28.726750 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerStarted","Data":"2c7c685807feee902d4eb4b91c7981a5208919ad906e30a3ca544ea7bc705a9f"}
Mar 20 16:27:33 crc kubenswrapper[4730]: I0320 16:27:33.769293 4730 generic.go:334] "Generic (PLEG): container finished" podID="457a736d-6c3f-486d-b8d1-fef19df33e26" containerID="8484dd0c7c8c11489d2bf88865470823c2eca98ee8d5c33a4ca245b45d333516" exitCode=0
Mar 20 16:27:33 crc kubenswrapper[4730]: I0320 16:27:33.769374 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerDied","Data":"8484dd0c7c8c11489d2bf88865470823c2eca98ee8d5c33a4ca245b45d333516"}
Mar 20 16:27:34 crc kubenswrapper[4730]: I0320 16:27:34.781081 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerStarted","Data":"1d85169860587c3477dc1f6ba3a5983294271f2fa8771c2709c4cb28da928e51"}
Mar 20 16:27:34 crc kubenswrapper[4730]: I0320 16:27:34.783223 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerID="2c7c685807feee902d4eb4b91c7981a5208919ad906e30a3ca544ea7bc705a9f" exitCode=0
Mar 20 16:27:34 crc kubenswrapper[4730]: I0320 16:27:34.783365 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerDied","Data":"2c7c685807feee902d4eb4b91c7981a5208919ad906e30a3ca544ea7bc705a9f"}
Mar 20 16:27:35 crc kubenswrapper[4730]: I0320 16:27:35.794840 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerStarted","Data":"761a57e8015a72556b7fa86e30ae3e038c8a644cf3b8bde8484efaceede61fb9"}
Mar 20 16:27:35 crc kubenswrapper[4730]: I0320 16:27:35.823675 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbwsz" podStartSLOduration=3.175471431 podStartE2EDuration="12.823658508s" podCreationTimestamp="2026-03-20 16:27:23 +0000 UTC" firstStartedPulling="2026-03-20 16:27:25.696040748 +0000 UTC m=+2904.909412117" lastFinishedPulling="2026-03-20 16:27:35.344227825 +0000 UTC m=+2914.557599194" observedRunningTime="2026-03-20 16:27:35.815442647 +0000 UTC m=+2915.028814016" watchObservedRunningTime="2026-03-20 16:27:35.823658508 +0000 UTC m=+2915.037029877"
Mar 20 16:27:37 crc kubenswrapper[4730]: I0320 16:27:37.815224 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerStarted","Data":"e9ab1e53399e9e0391e9713bfe1c8e9c80955463e25e0a964cade0038151bd75"}
Mar 20 16:27:37 crc kubenswrapper[4730]: I0320 16:27:37.815807 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"457a736d-6c3f-486d-b8d1-fef19df33e26","Type":"ContainerStarted","Data":"9c54daf456b96e86d6df7d4ffbf0e8ecc46f035e8d4101ea255939449f22bdd7"}
Mar 20 16:27:37 crc kubenswrapper[4730]: I0320 16:27:37.861642 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.861619206 podStartE2EDuration="14.861619206s" podCreationTimestamp="2026-03-20 16:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:27:37.854139176 +0000 UTC m=+2917.067510555" watchObservedRunningTime="2026-03-20 16:27:37.861619206 +0000 UTC m=+2917.074990575"
Mar 20 16:28:38 crc kubenswrapper[4730]: I0320 16:27:38.373095 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:38 crc kubenswrapper[4730]: I0320 16:27:38.373275 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:38 crc kubenswrapper[4730]: I0320 16:27:38.379992 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:38 crc kubenswrapper[4730]: I0320 16:27:38.830528 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Mar 20 16:27:44 crc kubenswrapper[4730]: I0320 16:27:44.160384 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:44 crc kubenswrapper[4730]: I0320 16:27:44.160950 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbwsz"
Mar 20 16:27:45 crc kubenswrapper[4730]: I0320 16:27:45.231930 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbwsz" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" probeResult="failure" output=<
Mar 20 16:27:45 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Mar 20 16:27:45 crc kubenswrapper[4730]: >
Mar 20 16:27:55 crc kubenswrapper[4730]: I0320 16:27:55.239551 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbwsz" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" probeResult="failure" output=<
Mar 20 16:27:55 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Mar 20 16:27:55 crc kubenswrapper[4730]: >
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.145096 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567068-d884x"]
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.147110 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.153496 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.153889 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.154054 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.155732 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567068-d884x"]
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.244980 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8sqp\" (UniqueName: \"kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp\") pod \"auto-csr-approver-29567068-d884x\" (UID: \"02177dc8-25be-4462-afba-d87fda4396c6\") " pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.347309 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8sqp\" (UniqueName: \"kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp\") pod \"auto-csr-approver-29567068-d884x\" (UID: \"02177dc8-25be-4462-afba-d87fda4396c6\") " pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.367320 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8sqp\" (UniqueName: \"kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp\") pod \"auto-csr-approver-29567068-d884x\" (UID: \"02177dc8-25be-4462-afba-d87fda4396c6\") " pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.469339 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:00 crc kubenswrapper[4730]: I0320 16:28:00.909504 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567068-d884x"]
Mar 20 16:28:01 crc kubenswrapper[4730]: I0320 16:28:01.210163 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567068-d884x" event={"ID":"02177dc8-25be-4462-afba-d87fda4396c6","Type":"ContainerStarted","Data":"a423e995336389e4a72bf13a9fbebe0ca777c1c9a74473ff5376c942e4c6d269"}
Mar 20 16:28:03 crc kubenswrapper[4730]: I0320 16:28:03.227767 4730 generic.go:334] "Generic (PLEG): container finished" podID="02177dc8-25be-4462-afba-d87fda4396c6" containerID="82eb38bbfad57d58cc830a2f14037d88d822b6b89fbb3a03b36b1b472f369ed1" exitCode=0
Mar 20 16:28:03 crc kubenswrapper[4730]: I0320 16:28:03.227824 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567068-d884x" event={"ID":"02177dc8-25be-4462-afba-d87fda4396c6","Type":"ContainerDied","Data":"82eb38bbfad57d58cc830a2f14037d88d822b6b89fbb3a03b36b1b472f369ed1"}
Mar 20 16:28:04 crc kubenswrapper[4730]: I0320 16:28:04.582712 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:04 crc kubenswrapper[4730]: I0320 16:28:04.640537 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8sqp\" (UniqueName: \"kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp\") pod \"02177dc8-25be-4462-afba-d87fda4396c6\" (UID: \"02177dc8-25be-4462-afba-d87fda4396c6\") "
Mar 20 16:28:04 crc kubenswrapper[4730]: I0320 16:28:04.656295 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp" (OuterVolumeSpecName: "kube-api-access-k8sqp") pod "02177dc8-25be-4462-afba-d87fda4396c6" (UID: "02177dc8-25be-4462-afba-d87fda4396c6"). InnerVolumeSpecName "kube-api-access-k8sqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:28:04 crc kubenswrapper[4730]: I0320 16:28:04.743556 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8sqp\" (UniqueName: \"kubernetes.io/projected/02177dc8-25be-4462-afba-d87fda4396c6-kube-api-access-k8sqp\") on node \"crc\" DevicePath \"\""
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.203349 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbwsz" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" probeResult="failure" output=<
Mar 20 16:28:05 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Mar 20 16:28:05 crc kubenswrapper[4730]: >
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.250526 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567068-d884x" event={"ID":"02177dc8-25be-4462-afba-d87fda4396c6","Type":"ContainerDied","Data":"a423e995336389e4a72bf13a9fbebe0ca777c1c9a74473ff5376c942e4c6d269"}
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.250562 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a423e995336389e4a72bf13a9fbebe0ca777c1c9a74473ff5376c942e4c6d269"
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.250625 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567068-d884x"
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.665713 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567062-58ww8"]
Mar 20 16:28:05 crc kubenswrapper[4730]: I0320 16:28:05.674498 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567062-58ww8"]
Mar 20 16:28:07 crc kubenswrapper[4730]: I0320 16:28:07.550904 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e053a518-d8c9-42a1-9c8d-83d5fec8de8c" path="/var/lib/kubelet/pods/e053a518-d8c9-42a1-9c8d-83d5fec8de8c/volumes"
Mar 20 16:28:12 crc kubenswrapper[4730]: I0320 16:28:12.880800 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:28:12 crc kubenswrapper[4730]: I0320 16:28:12.881573 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:28:15 crc kubenswrapper[4730]: I0320 16:28:15.205992 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbwsz" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" probeResult="failure" output=<
Mar 20 16:28:15 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Mar 20 16:28:15 crc kubenswrapper[4730]: >
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.866232 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Mar 20 16:28:17 crc kubenswrapper[4730]: E0320 16:28:17.867082 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02177dc8-25be-4462-afba-d87fda4396c6" containerName="oc"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.867099 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="02177dc8-25be-4462-afba-d87fda4396c6" containerName="oc"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.867341 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="02177dc8-25be-4462-afba-d87fda4396c6" containerName="oc"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.868152 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.871741 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.871917 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.872993 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.873320 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gh48g"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.882516 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926563 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t8r\" (UniqueName: \"kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926622 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926684 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926712 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926750 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926769 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926796 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:17 crc kubenswrapper[4730]: I0320 16:28:17.926923 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029029 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029101 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029123 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029150 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029216 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029263 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029318 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t8r\" (UniqueName: \"kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029346 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029381 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029681 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029749 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.029778 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest"
Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.030517 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.031152 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.035909 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.035920 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.041107 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.045931 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75t8r\" (UniqueName: \"kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r\") pod \"tempest-tests-tempest\" (UID: 
\"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.064907 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.208032 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 20 16:28:18 crc kubenswrapper[4730]: I0320 16:28:18.689280 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 20 16:28:19 crc kubenswrapper[4730]: I0320 16:28:19.385133 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c69a80b5-69a7-48c5-8ad4-5063b6cb4676","Type":"ContainerStarted","Data":"3a3b9dc78f4221095ec1a260d19a50071b9bafd10f8f90a8b372cb1bb88e13e5"} Mar 20 16:28:24 crc kubenswrapper[4730]: I0320 16:28:24.221431 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tbwsz" Mar 20 16:28:24 crc kubenswrapper[4730]: I0320 16:28:24.289448 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbwsz" Mar 20 16:28:25 crc kubenswrapper[4730]: I0320 16:28:25.019987 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"] Mar 20 16:28:25 crc kubenswrapper[4730]: I0320 16:28:25.450870 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tbwsz" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" containerID="cri-o://761a57e8015a72556b7fa86e30ae3e038c8a644cf3b8bde8484efaceede61fb9" gracePeriod=2 Mar 20 16:28:26 
crc kubenswrapper[4730]: I0320 16:28:26.460119 4730 generic.go:334] "Generic (PLEG): container finished" podID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerID="761a57e8015a72556b7fa86e30ae3e038c8a644cf3b8bde8484efaceede61fb9" exitCode=0 Mar 20 16:28:26 crc kubenswrapper[4730]: I0320 16:28:26.460190 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerDied","Data":"761a57e8015a72556b7fa86e30ae3e038c8a644cf3b8bde8484efaceede61fb9"} Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.293282 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbwsz" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.370765 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities\") pod \"d4b708fa-9cfd-4986-b8cf-829083a898dc\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.370969 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv8w\" (UniqueName: \"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w\") pod \"d4b708fa-9cfd-4986-b8cf-829083a898dc\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.371080 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content\") pod \"d4b708fa-9cfd-4986-b8cf-829083a898dc\" (UID: \"d4b708fa-9cfd-4986-b8cf-829083a898dc\") " Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.375681 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w" (OuterVolumeSpecName: "kube-api-access-6kv8w") pod "d4b708fa-9cfd-4986-b8cf-829083a898dc" (UID: "d4b708fa-9cfd-4986-b8cf-829083a898dc"). InnerVolumeSpecName "kube-api-access-6kv8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.376463 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities" (OuterVolumeSpecName: "utilities") pod "d4b708fa-9cfd-4986-b8cf-829083a898dc" (UID: "d4b708fa-9cfd-4986-b8cf-829083a898dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.473092 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.473140 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv8w\" (UniqueName: \"kubernetes.io/projected/d4b708fa-9cfd-4986-b8cf-829083a898dc-kube-api-access-6kv8w\") on node \"crc\" DevicePath \"\"" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.476928 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b708fa-9cfd-4986-b8cf-829083a898dc" (UID: "d4b708fa-9cfd-4986-b8cf-829083a898dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.484651 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbwsz" event={"ID":"d4b708fa-9cfd-4986-b8cf-829083a898dc","Type":"ContainerDied","Data":"bd691c4e3989104a40a71572866dcc51a30677ea52b34eea58c7ab44f894bd33"} Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.484712 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbwsz" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.484713 4730 scope.go:117] "RemoveContainer" containerID="761a57e8015a72556b7fa86e30ae3e038c8a644cf3b8bde8484efaceede61fb9" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.515737 4730 scope.go:117] "RemoveContainer" containerID="2c7c685807feee902d4eb4b91c7981a5208919ad906e30a3ca544ea7bc705a9f" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.530195 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"] Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.541534 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbwsz"] Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.550896 4730 scope.go:117] "RemoveContainer" containerID="3f7a8e2bcf60a0b2f78a1591703724fb4e894f9989953a89c4c5bdecd0d31e6b" Mar 20 16:28:28 crc kubenswrapper[4730]: I0320 16:28:28.576365 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b708fa-9cfd-4986-b8cf-829083a898dc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:28:29 crc kubenswrapper[4730]: I0320 16:28:29.495173 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"c69a80b5-69a7-48c5-8ad4-5063b6cb4676","Type":"ContainerStarted","Data":"7d54b7219e1263f587317fcdebaae6f3c46012a7941ad45c24813ffa14627f5b"} Mar 20 16:28:29 crc kubenswrapper[4730]: I0320 16:28:29.521122 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.141357662 podStartE2EDuration="13.521103426s" podCreationTimestamp="2026-03-20 16:28:16 +0000 UTC" firstStartedPulling="2026-03-20 16:28:18.692596798 +0000 UTC m=+2957.905968167" lastFinishedPulling="2026-03-20 16:28:28.072342562 +0000 UTC m=+2967.285713931" observedRunningTime="2026-03-20 16:28:29.516539797 +0000 UTC m=+2968.729911166" watchObservedRunningTime="2026-03-20 16:28:29.521103426 +0000 UTC m=+2968.734474785" Mar 20 16:28:29 crc kubenswrapper[4730]: I0320 16:28:29.546239 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" path="/var/lib/kubelet/pods/d4b708fa-9cfd-4986-b8cf-829083a898dc/volumes" Mar 20 16:28:42 crc kubenswrapper[4730]: I0320 16:28:42.880872 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:28:42 crc kubenswrapper[4730]: I0320 16:28:42.881541 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:28:49 crc kubenswrapper[4730]: I0320 16:28:49.971130 4730 scope.go:117] "RemoveContainer" containerID="0ccfc2a0baaac1ea53dda2b6020b62a6460f7d7641aebee239ddb879e9c99ccb" Mar 20 16:29:12 crc kubenswrapper[4730]: I0320 
16:29:12.880311 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:29:12 crc kubenswrapper[4730]: I0320 16:29:12.881158 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:29:12 crc kubenswrapper[4730]: I0320 16:29:12.882434 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:29:12 crc kubenswrapper[4730]: I0320 16:29:12.883510 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:29:12 crc kubenswrapper[4730]: I0320 16:29:12.883576 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f" gracePeriod=600 Mar 20 16:29:13 crc kubenswrapper[4730]: I0320 16:29:13.931693 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f" exitCode=0 Mar 20 
16:29:13 crc kubenswrapper[4730]: I0320 16:29:13.931980 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f"} Mar 20 16:29:13 crc kubenswrapper[4730]: I0320 16:29:13.932319 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34"} Mar 20 16:29:13 crc kubenswrapper[4730]: I0320 16:29:13.932347 4730 scope.go:117] "RemoveContainer" containerID="7c543dcd07b51b6540df53377dd2e3fd92636df6153928eaa86c229ad0e8e213" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.173002 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567070-h8qb5"] Mar 20 16:30:00 crc kubenswrapper[4730]: E0320 16:30:00.174459 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="extract-content" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.174553 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="extract-content" Mar 20 16:30:00 crc kubenswrapper[4730]: E0320 16:30:00.174613 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.174652 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" Mar 20 16:30:00 crc kubenswrapper[4730]: E0320 16:30:00.174706 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" 
containerName="extract-utilities" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.174718 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="extract-utilities" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.175098 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b708fa-9cfd-4986-b8cf-829083a898dc" containerName="registry-server" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.176175 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.181887 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.181991 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.182059 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp"] Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.183811 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.184624 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.185518 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.186220 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.191533 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp"] Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.222604 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567070-h8qb5"] Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.266959 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.266998 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.267172 4730 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5m2\" (UniqueName: \"kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.267198 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94nb\" (UniqueName: \"kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb\") pod \"auto-csr-approver-29567070-h8qb5\" (UID: \"f50e1094-eded-4000-b7f3-29722d8ba695\") " pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.369622 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.369691 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.369874 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5m2\" (UniqueName: \"kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2\") pod \"collect-profiles-29567070-5g4gp\" (UID: 
\"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.369904 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94nb\" (UniqueName: \"kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb\") pod \"auto-csr-approver-29567070-h8qb5\" (UID: \"f50e1094-eded-4000-b7f3-29722d8ba695\") " pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.370688 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.386810 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.390393 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94nb\" (UniqueName: \"kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb\") pod \"auto-csr-approver-29567070-h8qb5\" (UID: \"f50e1094-eded-4000-b7f3-29722d8ba695\") " pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.391531 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5m2\" (UniqueName: 
\"kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2\") pod \"collect-profiles-29567070-5g4gp\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.513030 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.524728 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.988551 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567070-h8qb5"] Mar 20 16:30:00 crc kubenswrapper[4730]: W0320 16:30:00.993353 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc WatchSource:0}: Error finding container 8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc: Status 404 returned error can't find the container with id 8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc Mar 20 16:30:00 crc kubenswrapper[4730]: I0320 16:30:00.998667 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp"] Mar 20 16:30:01 crc kubenswrapper[4730]: I0320 16:30:01.366995 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" event={"ID":"c86d92cc-d42e-496f-b31c-d6c56fb441c7","Type":"ContainerStarted","Data":"dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a"} Mar 20 16:30:01 crc kubenswrapper[4730]: I0320 16:30:01.367054 4730 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" event={"ID":"c86d92cc-d42e-496f-b31c-d6c56fb441c7","Type":"ContainerStarted","Data":"8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc"} Mar 20 16:30:01 crc kubenswrapper[4730]: I0320 16:30:01.369820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" event={"ID":"f50e1094-eded-4000-b7f3-29722d8ba695","Type":"ContainerStarted","Data":"b09d2473c6efd02c36230d95fa3de33d5cdb445c921f8ad156b8354f2a60aa3b"} Mar 20 16:30:01 crc kubenswrapper[4730]: I0320 16:30:01.387591 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" podStartSLOduration=1.3875718049999999 podStartE2EDuration="1.387571805s" podCreationTimestamp="2026-03-20 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 16:30:01.384078059 +0000 UTC m=+3060.597449448" watchObservedRunningTime="2026-03-20 16:30:01.387571805 +0000 UTC m=+3060.600943174" Mar 20 16:30:02 crc kubenswrapper[4730]: I0320 16:30:02.380409 4730 generic.go:334] "Generic (PLEG): container finished" podID="c86d92cc-d42e-496f-b31c-d6c56fb441c7" containerID="dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a" exitCode=0 Mar 20 16:30:02 crc kubenswrapper[4730]: I0320 16:30:02.380465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" event={"ID":"c86d92cc-d42e-496f-b31c-d6c56fb441c7","Type":"ContainerDied","Data":"dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a"} Mar 20 16:30:02 crc kubenswrapper[4730]: E0320 16:30:02.826724 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.780655 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.841526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume\") pod \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.841665 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume\") pod \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.841957 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk5m2\" (UniqueName: \"kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2\") pod \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\" (UID: \"c86d92cc-d42e-496f-b31c-d6c56fb441c7\") " Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.844504 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "c86d92cc-d42e-496f-b31c-d6c56fb441c7" (UID: "c86d92cc-d42e-496f-b31c-d6c56fb441c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.852162 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c86d92cc-d42e-496f-b31c-d6c56fb441c7" (UID: "c86d92cc-d42e-496f-b31c-d6c56fb441c7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.854371 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2" (OuterVolumeSpecName: "kube-api-access-wk5m2") pod "c86d92cc-d42e-496f-b31c-d6c56fb441c7" (UID: "c86d92cc-d42e-496f-b31c-d6c56fb441c7"). InnerVolumeSpecName "kube-api-access-wk5m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.944456 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk5m2\" (UniqueName: \"kubernetes.io/projected/c86d92cc-d42e-496f-b31c-d6c56fb441c7-kube-api-access-wk5m2\") on node \"crc\" DevicePath \"\"" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.944484 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c86d92cc-d42e-496f-b31c-d6c56fb441c7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:30:03 crc kubenswrapper[4730]: I0320 16:30:03.944528 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86d92cc-d42e-496f-b31c-d6c56fb441c7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:30:04 crc kubenswrapper[4730]: I0320 16:30:04.416344 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" event={"ID":"c86d92cc-d42e-496f-b31c-d6c56fb441c7","Type":"ContainerDied","Data":"8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc"} Mar 20 16:30:04 crc kubenswrapper[4730]: I0320 16:30:04.416644 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7d672e1388cfbe49cf1dd56d2c3d2ea5b0d87030b7b41cd27cd89e599652cc" Mar 20 16:30:04 crc kubenswrapper[4730]: I0320 16:30:04.416713 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp" Mar 20 16:30:04 crc kubenswrapper[4730]: I0320 16:30:04.472932 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk"] Mar 20 16:30:04 crc kubenswrapper[4730]: I0320 16:30:04.484405 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567025-sp8pk"] Mar 20 16:30:05 crc kubenswrapper[4730]: I0320 16:30:05.427853 4730 generic.go:334] "Generic (PLEG): container finished" podID="f50e1094-eded-4000-b7f3-29722d8ba695" containerID="20c367b1f4c39cf9e28d8318713966c9d59ce69c25b518b2e48b38d0f034fa5d" exitCode=0 Mar 20 16:30:05 crc kubenswrapper[4730]: I0320 16:30:05.427920 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" event={"ID":"f50e1094-eded-4000-b7f3-29722d8ba695","Type":"ContainerDied","Data":"20c367b1f4c39cf9e28d8318713966c9d59ce69c25b518b2e48b38d0f034fa5d"} Mar 20 16:30:05 crc kubenswrapper[4730]: I0320 16:30:05.548849 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3d4357-8143-45e9-ab45-e55f54735cbc" path="/var/lib/kubelet/pods/db3d4357-8143-45e9-ab45-e55f54735cbc/volumes" Mar 20 16:30:06 crc kubenswrapper[4730]: I0320 16:30:06.818106 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:06 crc kubenswrapper[4730]: I0320 16:30:06.910318 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d94nb\" (UniqueName: \"kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb\") pod \"f50e1094-eded-4000-b7f3-29722d8ba695\" (UID: \"f50e1094-eded-4000-b7f3-29722d8ba695\") " Mar 20 16:30:06 crc kubenswrapper[4730]: I0320 16:30:06.921649 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb" (OuterVolumeSpecName: "kube-api-access-d94nb") pod "f50e1094-eded-4000-b7f3-29722d8ba695" (UID: "f50e1094-eded-4000-b7f3-29722d8ba695"). InnerVolumeSpecName "kube-api-access-d94nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.014199 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d94nb\" (UniqueName: \"kubernetes.io/projected/f50e1094-eded-4000-b7f3-29722d8ba695-kube-api-access-d94nb\") on node \"crc\" DevicePath \"\"" Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.446435 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" event={"ID":"f50e1094-eded-4000-b7f3-29722d8ba695","Type":"ContainerDied","Data":"b09d2473c6efd02c36230d95fa3de33d5cdb445c921f8ad156b8354f2a60aa3b"} Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.446480 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b09d2473c6efd02c36230d95fa3de33d5cdb445c921f8ad156b8354f2a60aa3b" Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.446477 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567070-h8qb5" Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.885342 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567064-sklm5"] Mar 20 16:30:07 crc kubenswrapper[4730]: I0320 16:30:07.894570 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567064-sklm5"] Mar 20 16:30:09 crc kubenswrapper[4730]: I0320 16:30:09.545508 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36e403d-410f-40cc-8441-66c444837d24" path="/var/lib/kubelet/pods/a36e403d-410f-40cc-8441-66c444837d24/volumes" Mar 20 16:30:13 crc kubenswrapper[4730]: E0320 16:30:13.096985 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 16:30:23 crc kubenswrapper[4730]: E0320 16:30:23.348924 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 
20 16:30:33 crc kubenswrapper[4730]: E0320 16:30:33.597135 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 16:30:43 crc kubenswrapper[4730]: E0320 16:30:43.862098 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 16:30:50 crc kubenswrapper[4730]: I0320 16:30:50.277717 4730 scope.go:117] "RemoveContainer" containerID="7961ca89ce2a460b127b00611370ac925492414c79b33a7aef5d34aaea8acb7f" Mar 20 16:30:50 crc kubenswrapper[4730]: I0320 16:30:50.325332 4730 scope.go:117] "RemoveContainer" containerID="1718e89ccd737bfb9a3619c68d04fdd94aa68dd80d2bc675347f69c2cc40fd04" Mar 20 16:30:54 crc kubenswrapper[4730]: E0320 16:30:54.102805 4730 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-conmon-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86d92cc_d42e_496f_b31c_d6c56fb441c7.slice/crio-dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 16:31:42 crc kubenswrapper[4730]: I0320 16:31:42.880652 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:31:42 crc kubenswrapper[4730]: I0320 16:31:42.881089 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.161991 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567072-jzsw2"] Mar 20 16:32:00 crc kubenswrapper[4730]: E0320 16:32:00.163062 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50e1094-eded-4000-b7f3-29722d8ba695" containerName="oc" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.163077 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50e1094-eded-4000-b7f3-29722d8ba695" containerName="oc" Mar 20 16:32:00 crc kubenswrapper[4730]: E0320 16:32:00.163095 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86d92cc-d42e-496f-b31c-d6c56fb441c7" containerName="collect-profiles" Mar 20 16:32:00 crc 
kubenswrapper[4730]: I0320 16:32:00.163103 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86d92cc-d42e-496f-b31c-d6c56fb441c7" containerName="collect-profiles" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.163416 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50e1094-eded-4000-b7f3-29722d8ba695" containerName="oc" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.163439 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86d92cc-d42e-496f-b31c-d6c56fb441c7" containerName="collect-profiles" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.164566 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.167848 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.168174 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.169438 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.182169 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567072-jzsw2"] Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.284670 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjgcc\" (UniqueName: \"kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc\") pod \"auto-csr-approver-29567072-jzsw2\" (UID: \"cadf5c48-6db4-421c-977d-1216334a9383\") " pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.386371 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hjgcc\" (UniqueName: \"kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc\") pod \"auto-csr-approver-29567072-jzsw2\" (UID: \"cadf5c48-6db4-421c-977d-1216334a9383\") " pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.405728 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjgcc\" (UniqueName: \"kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc\") pod \"auto-csr-approver-29567072-jzsw2\" (UID: \"cadf5c48-6db4-421c-977d-1216334a9383\") " pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.484549 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:00 crc kubenswrapper[4730]: I0320 16:32:00.964976 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567072-jzsw2"] Mar 20 16:32:01 crc kubenswrapper[4730]: I0320 16:32:01.581343 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" event={"ID":"cadf5c48-6db4-421c-977d-1216334a9383","Type":"ContainerStarted","Data":"6f4c8959179fbc514db5c8ef46c1c92062e00de72c7204d982e7a00a0ed553a3"} Mar 20 16:32:03 crc kubenswrapper[4730]: I0320 16:32:03.630391 4730 generic.go:334] "Generic (PLEG): container finished" podID="cadf5c48-6db4-421c-977d-1216334a9383" containerID="e43508074aa0c7c7c61cb53a8852f8061943211007b9394c89ac6a8a6c904123" exitCode=0 Mar 20 16:32:03 crc kubenswrapper[4730]: I0320 16:32:03.630455 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" 
event={"ID":"cadf5c48-6db4-421c-977d-1216334a9383","Type":"ContainerDied","Data":"e43508074aa0c7c7c61cb53a8852f8061943211007b9394c89ac6a8a6c904123"} Mar 20 16:32:04 crc kubenswrapper[4730]: I0320 16:32:04.956402 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.113964 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjgcc\" (UniqueName: \"kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc\") pod \"cadf5c48-6db4-421c-977d-1216334a9383\" (UID: \"cadf5c48-6db4-421c-977d-1216334a9383\") " Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.119659 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc" (OuterVolumeSpecName: "kube-api-access-hjgcc") pod "cadf5c48-6db4-421c-977d-1216334a9383" (UID: "cadf5c48-6db4-421c-977d-1216334a9383"). InnerVolumeSpecName "kube-api-access-hjgcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.217228 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjgcc\" (UniqueName: \"kubernetes.io/projected/cadf5c48-6db4-421c-977d-1216334a9383-kube-api-access-hjgcc\") on node \"crc\" DevicePath \"\"" Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.656674 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" event={"ID":"cadf5c48-6db4-421c-977d-1216334a9383","Type":"ContainerDied","Data":"6f4c8959179fbc514db5c8ef46c1c92062e00de72c7204d982e7a00a0ed553a3"} Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.656939 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4c8959179fbc514db5c8ef46c1c92062e00de72c7204d982e7a00a0ed553a3" Mar 20 16:32:05 crc kubenswrapper[4730]: I0320 16:32:05.656726 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567072-jzsw2" Mar 20 16:32:06 crc kubenswrapper[4730]: I0320 16:32:06.048834 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567066-7w6j9"] Mar 20 16:32:06 crc kubenswrapper[4730]: I0320 16:32:06.058581 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567066-7w6j9"] Mar 20 16:32:07 crc kubenswrapper[4730]: I0320 16:32:07.545203 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2996ff7e-454f-40d8-bc5c-894c45b7a58c" path="/var/lib/kubelet/pods/2996ff7e-454f-40d8-bc5c-894c45b7a58c/volumes" Mar 20 16:32:12 crc kubenswrapper[4730]: I0320 16:32:12.882401 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 16:32:12 crc kubenswrapper[4730]: I0320 16:32:12.883084 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:32:42 crc kubenswrapper[4730]: I0320 16:32:42.880366 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:32:42 crc kubenswrapper[4730]: I0320 16:32:42.880783 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:32:42 crc kubenswrapper[4730]: I0320 16:32:42.880830 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:32:42 crc kubenswrapper[4730]: I0320 16:32:42.881550 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:32:42 crc kubenswrapper[4730]: I0320 16:32:42.881599 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" gracePeriod=600 Mar 20 16:32:43 crc kubenswrapper[4730]: E0320 16:32:43.004940 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:32:43 crc kubenswrapper[4730]: I0320 16:32:43.011537 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" exitCode=0 Mar 20 16:32:43 crc kubenswrapper[4730]: I0320 16:32:43.011572 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34"} Mar 20 16:32:43 crc kubenswrapper[4730]: I0320 16:32:43.011601 4730 scope.go:117] "RemoveContainer" containerID="f26b9418791737b63a44943ab58f9d5995a9697abde137f76404e07e867c6e5f" Mar 20 16:32:43 crc kubenswrapper[4730]: I0320 16:32:43.012539 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:32:43 crc kubenswrapper[4730]: E0320 16:32:43.012917 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:32:50 crc kubenswrapper[4730]: I0320 16:32:50.446519 4730 scope.go:117] "RemoveContainer" containerID="1bc53c769c23ddf21b26ded07f20d42f23d89029cfc127b5683d217f18b840d6" Mar 20 16:32:53 crc kubenswrapper[4730]: I0320 16:32:53.533428 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:32:53 crc kubenswrapper[4730]: E0320 16:32:53.534127 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:33:06 crc kubenswrapper[4730]: I0320 16:33:06.534791 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:33:06 crc kubenswrapper[4730]: E0320 16:33:06.535545 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:33:20 crc kubenswrapper[4730]: I0320 16:33:20.533053 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:33:20 crc kubenswrapper[4730]: E0320 16:33:20.534444 4730 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:33:33 crc kubenswrapper[4730]: I0320 16:33:33.532882 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:33:33 crc kubenswrapper[4730]: E0320 16:33:33.533649 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:33:47 crc kubenswrapper[4730]: I0320 16:33:47.533921 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:33:47 crc kubenswrapper[4730]: E0320 16:33:47.534832 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.171793 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567074-2xrgq"] Mar 20 16:34:00 crc kubenswrapper[4730]: E0320 
16:34:00.172985 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadf5c48-6db4-421c-977d-1216334a9383" containerName="oc" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.173003 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadf5c48-6db4-421c-977d-1216334a9383" containerName="oc" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.173301 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadf5c48-6db4-421c-977d-1216334a9383" containerName="oc" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.174154 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.175873 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhfvb\" (UniqueName: \"kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb\") pod \"auto-csr-approver-29567074-2xrgq\" (UID: \"2a649324-b73a-44e0-94e5-2b8c54476367\") " pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.177663 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.177839 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.182700 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567074-2xrgq"] Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.183567 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.277785 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mhfvb\" (UniqueName: \"kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb\") pod \"auto-csr-approver-29567074-2xrgq\" (UID: \"2a649324-b73a-44e0-94e5-2b8c54476367\") " pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.299290 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhfvb\" (UniqueName: \"kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb\") pod \"auto-csr-approver-29567074-2xrgq\" (UID: \"2a649324-b73a-44e0-94e5-2b8c54476367\") " pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:00 crc kubenswrapper[4730]: I0320 16:34:00.492944 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:01 crc kubenswrapper[4730]: I0320 16:34:01.025229 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567074-2xrgq"] Mar 20 16:34:01 crc kubenswrapper[4730]: I0320 16:34:01.028112 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:34:01 crc kubenswrapper[4730]: I0320 16:34:01.545551 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:34:01 crc kubenswrapper[4730]: E0320 16:34:01.546198 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:34:01 crc kubenswrapper[4730]: I0320 16:34:01.771238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" event={"ID":"2a649324-b73a-44e0-94e5-2b8c54476367","Type":"ContainerStarted","Data":"2dbc3338f430f65a77101dc0cc6803709cb1f4c2df202c94dde2ff10ca47ffca"} Mar 20 16:34:02 crc kubenswrapper[4730]: I0320 16:34:02.784753 4730 generic.go:334] "Generic (PLEG): container finished" podID="2a649324-b73a-44e0-94e5-2b8c54476367" containerID="2bb1a712fbfcbaa124ee788c9be392cdb5ddacf514828a35ba09574bc19839a4" exitCode=0 Mar 20 16:34:02 crc kubenswrapper[4730]: I0320 16:34:02.784974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" event={"ID":"2a649324-b73a-44e0-94e5-2b8c54476367","Type":"ContainerDied","Data":"2bb1a712fbfcbaa124ee788c9be392cdb5ddacf514828a35ba09574bc19839a4"} Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.166744 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.267395 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhfvb\" (UniqueName: \"kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb\") pod \"2a649324-b73a-44e0-94e5-2b8c54476367\" (UID: \"2a649324-b73a-44e0-94e5-2b8c54476367\") " Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.273135 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb" (OuterVolumeSpecName: "kube-api-access-mhfvb") pod "2a649324-b73a-44e0-94e5-2b8c54476367" (UID: "2a649324-b73a-44e0-94e5-2b8c54476367"). InnerVolumeSpecName "kube-api-access-mhfvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.370577 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhfvb\" (UniqueName: \"kubernetes.io/projected/2a649324-b73a-44e0-94e5-2b8c54476367-kube-api-access-mhfvb\") on node \"crc\" DevicePath \"\"" Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.804870 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" event={"ID":"2a649324-b73a-44e0-94e5-2b8c54476367","Type":"ContainerDied","Data":"2dbc3338f430f65a77101dc0cc6803709cb1f4c2df202c94dde2ff10ca47ffca"} Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.805213 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbc3338f430f65a77101dc0cc6803709cb1f4c2df202c94dde2ff10ca47ffca" Mar 20 16:34:04 crc kubenswrapper[4730]: I0320 16:34:04.804942 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567074-2xrgq" Mar 20 16:34:05 crc kubenswrapper[4730]: I0320 16:34:05.247374 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567068-d884x"] Mar 20 16:34:05 crc kubenswrapper[4730]: I0320 16:34:05.254318 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567068-d884x"] Mar 20 16:34:05 crc kubenswrapper[4730]: I0320 16:34:05.544048 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02177dc8-25be-4462-afba-d87fda4396c6" path="/var/lib/kubelet/pods/02177dc8-25be-4462-afba-d87fda4396c6/volumes" Mar 20 16:34:14 crc kubenswrapper[4730]: I0320 16:34:14.533857 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:34:14 crc kubenswrapper[4730]: E0320 16:34:14.534788 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:34:26 crc kubenswrapper[4730]: I0320 16:34:26.533026 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:34:26 crc kubenswrapper[4730]: E0320 16:34:26.534715 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:34:37 crc kubenswrapper[4730]: I0320 16:34:37.533802 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:34:37 crc kubenswrapper[4730]: E0320 16:34:37.534687 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:34:50 crc kubenswrapper[4730]: I0320 16:34:50.540183 4730 scope.go:117] "RemoveContainer" containerID="82eb38bbfad57d58cc830a2f14037d88d822b6b89fbb3a03b36b1b472f369ed1" Mar 20 16:34:51 crc kubenswrapper[4730]: I0320 16:34:51.545606 4730 scope.go:117] "RemoveContainer" 
containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:34:51 crc kubenswrapper[4730]: E0320 16:34:51.546144 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:03 crc kubenswrapper[4730]: I0320 16:35:03.533324 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:35:03 crc kubenswrapper[4730]: E0320 16:35:03.534193 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.557775 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:05 crc kubenswrapper[4730]: E0320 16:35:05.558905 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a649324-b73a-44e0-94e5-2b8c54476367" containerName="oc" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.558922 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a649324-b73a-44e0-94e5-2b8c54476367" containerName="oc" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.559166 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a649324-b73a-44e0-94e5-2b8c54476367" containerName="oc" Mar 20 16:35:05 crc 
kubenswrapper[4730]: I0320 16:35:05.561030 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.587155 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.631171 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.631263 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnwmt\" (UniqueName: \"kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.631629 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.733597 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc 
kubenswrapper[4730]: I0320 16:35:05.733669 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnwmt\" (UniqueName: \"kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.733924 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.734116 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.734864 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 16:35:05.757712 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnwmt\" (UniqueName: \"kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt\") pod \"redhat-marketplace-q2kb5\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:05 crc kubenswrapper[4730]: I0320 
16:35:05.882808 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:06 crc kubenswrapper[4730]: I0320 16:35:06.396823 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:06 crc kubenswrapper[4730]: I0320 16:35:06.413967 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerStarted","Data":"8dadef0cacc819297d7b74d8157d397280f72d917551fbf30a2a26925d8faa40"} Mar 20 16:35:07 crc kubenswrapper[4730]: I0320 16:35:07.423498 4730 generic.go:334] "Generic (PLEG): container finished" podID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerID="d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db" exitCode=0 Mar 20 16:35:07 crc kubenswrapper[4730]: I0320 16:35:07.423592 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerDied","Data":"d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db"} Mar 20 16:35:08 crc kubenswrapper[4730]: I0320 16:35:08.439507 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerStarted","Data":"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27"} Mar 20 16:35:10 crc kubenswrapper[4730]: I0320 16:35:10.456804 4730 generic.go:334] "Generic (PLEG): container finished" podID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerID="fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27" exitCode=0 Mar 20 16:35:10 crc kubenswrapper[4730]: I0320 16:35:10.457000 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" 
event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerDied","Data":"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27"} Mar 20 16:35:11 crc kubenswrapper[4730]: I0320 16:35:11.468678 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerStarted","Data":"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a"} Mar 20 16:35:11 crc kubenswrapper[4730]: I0320 16:35:11.492170 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q2kb5" podStartSLOduration=3.01077823 podStartE2EDuration="6.492149858s" podCreationTimestamp="2026-03-20 16:35:05 +0000 UTC" firstStartedPulling="2026-03-20 16:35:07.426758026 +0000 UTC m=+3366.640129395" lastFinishedPulling="2026-03-20 16:35:10.908129654 +0000 UTC m=+3370.121501023" observedRunningTime="2026-03-20 16:35:11.48692866 +0000 UTC m=+3370.700300029" watchObservedRunningTime="2026-03-20 16:35:11.492149858 +0000 UTC m=+3370.705521227" Mar 20 16:35:15 crc kubenswrapper[4730]: I0320 16:35:15.883386 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:15 crc kubenswrapper[4730]: I0320 16:35:15.884735 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:15 crc kubenswrapper[4730]: I0320 16:35:15.941310 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:16 crc kubenswrapper[4730]: I0320 16:35:16.533771 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:35:16 crc kubenswrapper[4730]: E0320 16:35:16.534184 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:16 crc kubenswrapper[4730]: I0320 16:35:16.587401 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:16 crc kubenswrapper[4730]: I0320 16:35:16.653027 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:18 crc kubenswrapper[4730]: I0320 16:35:18.547752 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q2kb5" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="registry-server" containerID="cri-o://22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a" gracePeriod=2 Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.031169 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.135206 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities\") pod \"6063e942-1052-4d11-b5d3-22b8a54fac0b\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.135448 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnwmt\" (UniqueName: \"kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt\") pod \"6063e942-1052-4d11-b5d3-22b8a54fac0b\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.135468 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content\") pod \"6063e942-1052-4d11-b5d3-22b8a54fac0b\" (UID: \"6063e942-1052-4d11-b5d3-22b8a54fac0b\") " Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.136081 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities" (OuterVolumeSpecName: "utilities") pod "6063e942-1052-4d11-b5d3-22b8a54fac0b" (UID: "6063e942-1052-4d11-b5d3-22b8a54fac0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.142625 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt" (OuterVolumeSpecName: "kube-api-access-tnwmt") pod "6063e942-1052-4d11-b5d3-22b8a54fac0b" (UID: "6063e942-1052-4d11-b5d3-22b8a54fac0b"). InnerVolumeSpecName "kube-api-access-tnwmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.169935 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6063e942-1052-4d11-b5d3-22b8a54fac0b" (UID: "6063e942-1052-4d11-b5d3-22b8a54fac0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.238087 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.238148 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6063e942-1052-4d11-b5d3-22b8a54fac0b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.238166 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnwmt\" (UniqueName: \"kubernetes.io/projected/6063e942-1052-4d11-b5d3-22b8a54fac0b-kube-api-access-tnwmt\") on node \"crc\" DevicePath \"\"" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.564573 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2kb5" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.564582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerDied","Data":"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a"} Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.564641 4730 scope.go:117] "RemoveContainer" containerID="22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.564434 4730 generic.go:334] "Generic (PLEG): container finished" podID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerID="22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a" exitCode=0 Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.565322 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2kb5" event={"ID":"6063e942-1052-4d11-b5d3-22b8a54fac0b","Type":"ContainerDied","Data":"8dadef0cacc819297d7b74d8157d397280f72d917551fbf30a2a26925d8faa40"} Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.598350 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.602509 4730 scope.go:117] "RemoveContainer" containerID="fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.607453 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2kb5"] Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.636570 4730 scope.go:117] "RemoveContainer" containerID="d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.671644 4730 scope.go:117] "RemoveContainer" 
containerID="22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a" Mar 20 16:35:19 crc kubenswrapper[4730]: E0320 16:35:19.672110 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a\": container with ID starting with 22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a not found: ID does not exist" containerID="22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.672161 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a"} err="failed to get container status \"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a\": rpc error: code = NotFound desc = could not find container \"22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a\": container with ID starting with 22dc69ffe2787e82cc89a2d123fa9f97367487636c43039a924e4b35f0f2ad2a not found: ID does not exist" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.672190 4730 scope.go:117] "RemoveContainer" containerID="fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27" Mar 20 16:35:19 crc kubenswrapper[4730]: E0320 16:35:19.672638 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27\": container with ID starting with fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27 not found: ID does not exist" containerID="fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.672670 4730 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27"} err="failed to get container status \"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27\": rpc error: code = NotFound desc = could not find container \"fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27\": container with ID starting with fedebce627b5326d3479102ee71d13b0ef2a44bb20a21e8ed4485d42c7214a27 not found: ID does not exist" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.672691 4730 scope.go:117] "RemoveContainer" containerID="d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db" Mar 20 16:35:19 crc kubenswrapper[4730]: E0320 16:35:19.672904 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db\": container with ID starting with d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db not found: ID does not exist" containerID="d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db" Mar 20 16:35:19 crc kubenswrapper[4730]: I0320 16:35:19.672932 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db"} err="failed to get container status \"d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db\": rpc error: code = NotFound desc = could not find container \"d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db\": container with ID starting with d7d89caefd3eb9f14a2b8ca66fe393d83a87edc1c07d15102fe40ddac9e511db not found: ID does not exist" Mar 20 16:35:21 crc kubenswrapper[4730]: I0320 16:35:21.561658 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" path="/var/lib/kubelet/pods/6063e942-1052-4d11-b5d3-22b8a54fac0b/volumes" Mar 20 16:35:31 crc kubenswrapper[4730]: I0320 
16:35:31.540290 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:35:31 crc kubenswrapper[4730]: E0320 16:35:31.541016 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:45 crc kubenswrapper[4730]: I0320 16:35:45.534544 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:35:45 crc kubenswrapper[4730]: E0320 16:35:45.535719 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.253237 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:35:51 crc kubenswrapper[4730]: E0320 16:35:51.254261 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="registry-server" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.254277 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="registry-server" Mar 20 16:35:51 crc kubenswrapper[4730]: E0320 16:35:51.254324 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="extract-content" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.254333 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="extract-content" Mar 20 16:35:51 crc kubenswrapper[4730]: E0320 16:35:51.254350 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="extract-utilities" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.254359 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="extract-utilities" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.254600 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6063e942-1052-4d11-b5d3-22b8a54fac0b" containerName="registry-server" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.256428 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.277752 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.318589 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.318694 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsh5z\" (UniqueName: \"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z\") pod \"community-operators-qtb9x\" (UID: 
\"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.318857 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.420571 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.420643 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.420717 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsh5z\" (UniqueName: \"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.421191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content\") pod \"community-operators-qtb9x\" (UID: 
\"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.421191 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.453096 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsh5z\" (UniqueName: \"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z\") pod \"community-operators-qtb9x\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:51 crc kubenswrapper[4730]: I0320 16:35:51.602278 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:35:52 crc kubenswrapper[4730]: I0320 16:35:52.112583 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:35:52 crc kubenswrapper[4730]: I0320 16:35:52.871212 4730 generic.go:334] "Generic (PLEG): container finished" podID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerID="0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b" exitCode=0 Mar 20 16:35:52 crc kubenswrapper[4730]: I0320 16:35:52.871535 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerDied","Data":"0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b"} Mar 20 16:35:52 crc kubenswrapper[4730]: I0320 16:35:52.872433 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" 
event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerStarted","Data":"eff11742f0e763d24178c93c7f26dcee77c92125f068d32fc5f5646326a8fc8b"} Mar 20 16:35:54 crc kubenswrapper[4730]: I0320 16:35:54.894957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerStarted","Data":"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16"} Mar 20 16:35:56 crc kubenswrapper[4730]: I0320 16:35:56.533942 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:35:56 crc kubenswrapper[4730]: E0320 16:35:56.534554 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:35:56 crc kubenswrapper[4730]: I0320 16:35:56.913999 4730 generic.go:334] "Generic (PLEG): container finished" podID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerID="288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16" exitCode=0 Mar 20 16:35:56 crc kubenswrapper[4730]: I0320 16:35:56.914049 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerDied","Data":"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16"} Mar 20 16:35:57 crc kubenswrapper[4730]: I0320 16:35:57.926142 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" 
event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerStarted","Data":"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5"} Mar 20 16:35:57 crc kubenswrapper[4730]: I0320 16:35:57.950801 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qtb9x" podStartSLOduration=2.212236401 podStartE2EDuration="6.950779153s" podCreationTimestamp="2026-03-20 16:35:51 +0000 UTC" firstStartedPulling="2026-03-20 16:35:52.875112506 +0000 UTC m=+3412.088483875" lastFinishedPulling="2026-03-20 16:35:57.613655258 +0000 UTC m=+3416.827026627" observedRunningTime="2026-03-20 16:35:57.943193428 +0000 UTC m=+3417.156564807" watchObservedRunningTime="2026-03-20 16:35:57.950779153 +0000 UTC m=+3417.164150522" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.144141 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567076-5zjgq"] Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.146097 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.148067 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.148298 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.150765 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.154959 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567076-5zjgq"] Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.315648 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsjd\" (UniqueName: \"kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd\") pod \"auto-csr-approver-29567076-5zjgq\" (UID: \"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051\") " pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.418102 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xsjd\" (UniqueName: \"kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd\") pod \"auto-csr-approver-29567076-5zjgq\" (UID: \"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051\") " pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.443808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsjd\" (UniqueName: \"kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd\") pod \"auto-csr-approver-29567076-5zjgq\" (UID: \"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051\") " 
pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.468891 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.943077 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567076-5zjgq"] Mar 20 16:36:00 crc kubenswrapper[4730]: I0320 16:36:00.969704 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" event={"ID":"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051","Type":"ContainerStarted","Data":"90c9a5f2489ea935ff20be9876decd344799b1f14ecda31d27a44ae5ad46b793"} Mar 20 16:36:01 crc kubenswrapper[4730]: I0320 16:36:01.602585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:01 crc kubenswrapper[4730]: I0320 16:36:01.602634 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:02 crc kubenswrapper[4730]: I0320 16:36:02.660994 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qtb9x" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="registry-server" probeResult="failure" output=< Mar 20 16:36:02 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:36:02 crc kubenswrapper[4730]: > Mar 20 16:36:02 crc kubenswrapper[4730]: I0320 16:36:02.996261 4730 generic.go:334] "Generic (PLEG): container finished" podID="bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" containerID="7abe93567f97d011f8ae053e88185c6004136b63e3d5f72b19beb707014bf434" exitCode=0 Mar 20 16:36:02 crc kubenswrapper[4730]: I0320 16:36:02.996743 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" 
event={"ID":"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051","Type":"ContainerDied","Data":"7abe93567f97d011f8ae053e88185c6004136b63e3d5f72b19beb707014bf434"} Mar 20 16:36:04 crc kubenswrapper[4730]: I0320 16:36:04.426773 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:04 crc kubenswrapper[4730]: I0320 16:36:04.530664 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xsjd\" (UniqueName: \"kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd\") pod \"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051\" (UID: \"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051\") " Mar 20 16:36:04 crc kubenswrapper[4730]: I0320 16:36:04.541574 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd" (OuterVolumeSpecName: "kube-api-access-9xsjd") pod "bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" (UID: "bcb7d22d-4ad0-4a9c-bf00-e966d1abb051"). InnerVolumeSpecName "kube-api-access-9xsjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:36:04 crc kubenswrapper[4730]: I0320 16:36:04.634002 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xsjd\" (UniqueName: \"kubernetes.io/projected/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051-kube-api-access-9xsjd\") on node \"crc\" DevicePath \"\"" Mar 20 16:36:05 crc kubenswrapper[4730]: I0320 16:36:05.024855 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" event={"ID":"bcb7d22d-4ad0-4a9c-bf00-e966d1abb051","Type":"ContainerDied","Data":"90c9a5f2489ea935ff20be9876decd344799b1f14ecda31d27a44ae5ad46b793"} Mar 20 16:36:05 crc kubenswrapper[4730]: I0320 16:36:05.025097 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c9a5f2489ea935ff20be9876decd344799b1f14ecda31d27a44ae5ad46b793" Mar 20 16:36:05 crc kubenswrapper[4730]: I0320 16:36:05.024900 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567076-5zjgq" Mar 20 16:36:05 crc kubenswrapper[4730]: I0320 16:36:05.531213 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567070-h8qb5"] Mar 20 16:36:05 crc kubenswrapper[4730]: I0320 16:36:05.543926 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567070-h8qb5"] Mar 20 16:36:07 crc kubenswrapper[4730]: I0320 16:36:07.544589 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50e1094-eded-4000-b7f3-29722d8ba695" path="/var/lib/kubelet/pods/f50e1094-eded-4000-b7f3-29722d8ba695/volumes" Mar 20 16:36:09 crc kubenswrapper[4730]: I0320 16:36:09.537323 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:36:09 crc kubenswrapper[4730]: E0320 16:36:09.538310 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:36:11 crc kubenswrapper[4730]: I0320 16:36:11.651693 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:11 crc kubenswrapper[4730]: I0320 16:36:11.703790 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:11 crc kubenswrapper[4730]: I0320 16:36:11.887663 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.137373 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qtb9x" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="registry-server" containerID="cri-o://1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5" gracePeriod=2 Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.692391 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.707789 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities\") pod \"ccc2048d-115d-4872-a28e-0a34a552b5fc\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.709179 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities" (OuterVolumeSpecName: "utilities") pod "ccc2048d-115d-4872-a28e-0a34a552b5fc" (UID: "ccc2048d-115d-4872-a28e-0a34a552b5fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.712113 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsh5z\" (UniqueName: \"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z\") pod \"ccc2048d-115d-4872-a28e-0a34a552b5fc\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.712885 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content\") pod \"ccc2048d-115d-4872-a28e-0a34a552b5fc\" (UID: \"ccc2048d-115d-4872-a28e-0a34a552b5fc\") " Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.714485 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.722102 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z" (OuterVolumeSpecName: "kube-api-access-lsh5z") pod "ccc2048d-115d-4872-a28e-0a34a552b5fc" (UID: "ccc2048d-115d-4872-a28e-0a34a552b5fc"). InnerVolumeSpecName "kube-api-access-lsh5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.778173 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccc2048d-115d-4872-a28e-0a34a552b5fc" (UID: "ccc2048d-115d-4872-a28e-0a34a552b5fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.816142 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc2048d-115d-4872-a28e-0a34a552b5fc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:36:13 crc kubenswrapper[4730]: I0320 16:36:13.816191 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsh5z\" (UniqueName: \"kubernetes.io/projected/ccc2048d-115d-4872-a28e-0a34a552b5fc-kube-api-access-lsh5z\") on node \"crc\" DevicePath \"\"" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.159535 4730 generic.go:334] "Generic (PLEG): container finished" podID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerID="1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5" exitCode=0 Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.159596 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtb9x" event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerDied","Data":"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5"} Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.159628 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-qtb9x" event={"ID":"ccc2048d-115d-4872-a28e-0a34a552b5fc","Type":"ContainerDied","Data":"eff11742f0e763d24178c93c7f26dcee77c92125f068d32fc5f5646326a8fc8b"} Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.159647 4730 scope.go:117] "RemoveContainer" containerID="1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.159817 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtb9x" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.198169 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.200540 4730 scope.go:117] "RemoveContainer" containerID="288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.207838 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qtb9x"] Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.228307 4730 scope.go:117] "RemoveContainer" containerID="0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.276657 4730 scope.go:117] "RemoveContainer" containerID="1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5" Mar 20 16:36:14 crc kubenswrapper[4730]: E0320 16:36:14.277129 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5\": container with ID starting with 1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5 not found: ID does not exist" containerID="1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 
16:36:14.277171 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5"} err="failed to get container status \"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5\": rpc error: code = NotFound desc = could not find container \"1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5\": container with ID starting with 1ea60d900e9ef26ef6983c268a7de372b9ed4b12de5cec649b6cba7470812fe5 not found: ID does not exist" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.277201 4730 scope.go:117] "RemoveContainer" containerID="288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16" Mar 20 16:36:14 crc kubenswrapper[4730]: E0320 16:36:14.277554 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16\": container with ID starting with 288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16 not found: ID does not exist" containerID="288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.277595 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16"} err="failed to get container status \"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16\": rpc error: code = NotFound desc = could not find container \"288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16\": container with ID starting with 288dd475a1bf708e43583f729e6d81820e0c88b93d5fa513bc9fc1acb9234d16 not found: ID does not exist" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.277628 4730 scope.go:117] "RemoveContainer" containerID="0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b" Mar 20 16:36:14 crc 
kubenswrapper[4730]: E0320 16:36:14.277930 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b\": container with ID starting with 0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b not found: ID does not exist" containerID="0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b" Mar 20 16:36:14 crc kubenswrapper[4730]: I0320 16:36:14.277962 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b"} err="failed to get container status \"0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b\": rpc error: code = NotFound desc = could not find container \"0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b\": container with ID starting with 0f0d2d1d59884c6abc7a55b255dabe9f1d0344a1ac94f3f4341815bd4338056b not found: ID does not exist" Mar 20 16:36:15 crc kubenswrapper[4730]: I0320 16:36:15.544863 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" path="/var/lib/kubelet/pods/ccc2048d-115d-4872-a28e-0a34a552b5fc/volumes" Mar 20 16:36:24 crc kubenswrapper[4730]: I0320 16:36:24.533360 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:36:24 crc kubenswrapper[4730]: E0320 16:36:24.534322 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:36:36 crc 
kubenswrapper[4730]: I0320 16:36:36.533029 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:36:36 crc kubenswrapper[4730]: E0320 16:36:36.533843 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:36:48 crc kubenswrapper[4730]: I0320 16:36:48.532750 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:36:48 crc kubenswrapper[4730]: E0320 16:36:48.533411 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:36:50 crc kubenswrapper[4730]: I0320 16:36:50.665595 4730 scope.go:117] "RemoveContainer" containerID="20c367b1f4c39cf9e28d8318713966c9d59ce69c25b518b2e48b38d0f034fa5d" Mar 20 16:37:03 crc kubenswrapper[4730]: I0320 16:37:03.533885 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:37:03 crc kubenswrapper[4730]: E0320 16:37:03.535147 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:37:15 crc kubenswrapper[4730]: I0320 16:37:15.533595 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:37:15 crc kubenswrapper[4730]: E0320 16:37:15.534369 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:37:28 crc kubenswrapper[4730]: I0320 16:37:28.534127 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:37:28 crc kubenswrapper[4730]: E0320 16:37:28.536017 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:37:40 crc kubenswrapper[4730]: I0320 16:37:40.533129 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:37:40 crc kubenswrapper[4730]: E0320 16:37:40.533935 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:37:51 crc kubenswrapper[4730]: I0320 16:37:51.541810 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:37:52 crc kubenswrapper[4730]: I0320 16:37:52.045349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6"} Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.159887 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567078-bsnqc"] Mar 20 16:38:00 crc kubenswrapper[4730]: E0320 16:38:00.161188 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" containerName="oc" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161211 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" containerName="oc" Mar 20 16:38:00 crc kubenswrapper[4730]: E0320 16:38:00.161244 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="extract-utilities" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161278 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="extract-utilities" Mar 20 16:38:00 crc kubenswrapper[4730]: E0320 16:38:00.161299 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="registry-server" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161311 4730 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="registry-server" Mar 20 16:38:00 crc kubenswrapper[4730]: E0320 16:38:00.161327 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="extract-content" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161336 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="extract-content" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161597 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" containerName="oc" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.161622 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc2048d-115d-4872-a28e-0a34a552b5fc" containerName="registry-server" Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.162554 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.165582 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.172862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.173091 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.175841 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567078-bsnqc"]
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.222140 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6kk4\" (UniqueName: \"kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4\") pod \"auto-csr-approver-29567078-bsnqc\" (UID: \"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4\") " pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.324853 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6kk4\" (UniqueName: \"kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4\") pod \"auto-csr-approver-29567078-bsnqc\" (UID: \"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4\") " pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.343532 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6kk4\" (UniqueName: \"kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4\") pod \"auto-csr-approver-29567078-bsnqc\" (UID: \"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4\") " pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.485191 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:00 crc kubenswrapper[4730]: I0320 16:38:00.933408 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567078-bsnqc"]
Mar 20 16:38:00 crc kubenswrapper[4730]: W0320 16:38:00.935547 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7178ffb9_4891_485e_b3d6_d7fcc8f22ef4.slice/crio-a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784 WatchSource:0}: Error finding container a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784: Status 404 returned error can't find the container with id a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784
Mar 20 16:38:01 crc kubenswrapper[4730]: I0320 16:38:01.128478 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567078-bsnqc" event={"ID":"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4","Type":"ContainerStarted","Data":"a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784"}
Mar 20 16:38:02 crc kubenswrapper[4730]: I0320 16:38:02.146044 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567078-bsnqc" event={"ID":"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4","Type":"ContainerStarted","Data":"3b2c3b7b49826d995d5d767d6f87b9a288d68f48a6bee9638eee715a068be2d7"}
Mar 20 16:38:02 crc kubenswrapper[4730]: I0320 16:38:02.165717 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567078-bsnqc" podStartSLOduration=1.251741025 podStartE2EDuration="2.165698402s" podCreationTimestamp="2026-03-20 16:38:00 +0000 UTC" firstStartedPulling="2026-03-20 16:38:00.938334738 +0000 UTC m=+3540.151706107" lastFinishedPulling="2026-03-20 16:38:01.852292125 +0000 UTC m=+3541.065663484" observedRunningTime="2026-03-20 16:38:02.158963702 +0000 UTC m=+3541.372335081" watchObservedRunningTime="2026-03-20 16:38:02.165698402 +0000 UTC m=+3541.379069771"
Mar 20 16:38:03 crc kubenswrapper[4730]: I0320 16:38:03.158805 4730 generic.go:334] "Generic (PLEG): container finished" podID="7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" containerID="3b2c3b7b49826d995d5d767d6f87b9a288d68f48a6bee9638eee715a068be2d7" exitCode=0
Mar 20 16:38:03 crc kubenswrapper[4730]: I0320 16:38:03.158866 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567078-bsnqc" event={"ID":"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4","Type":"ContainerDied","Data":"3b2c3b7b49826d995d5d767d6f87b9a288d68f48a6bee9638eee715a068be2d7"}
Mar 20 16:38:04 crc kubenswrapper[4730]: I0320 16:38:04.562061 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:04 crc kubenswrapper[4730]: I0320 16:38:04.614649 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6kk4\" (UniqueName: \"kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4\") pod \"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4\" (UID: \"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4\") "
Mar 20 16:38:04 crc kubenswrapper[4730]: I0320 16:38:04.621430 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4" (OuterVolumeSpecName: "kube-api-access-m6kk4") pod "7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" (UID: "7178ffb9-4891-485e-b3d6-d7fcc8f22ef4"). InnerVolumeSpecName "kube-api-access-m6kk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:38:04 crc kubenswrapper[4730]: I0320 16:38:04.718239 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6kk4\" (UniqueName: \"kubernetes.io/projected/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4-kube-api-access-m6kk4\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:05 crc kubenswrapper[4730]: I0320 16:38:05.181798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567078-bsnqc" event={"ID":"7178ffb9-4891-485e-b3d6-d7fcc8f22ef4","Type":"ContainerDied","Data":"a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784"}
Mar 20 16:38:05 crc kubenswrapper[4730]: I0320 16:38:05.181838 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a07fe20e20d11ec568a974c1ea93616cd7badad1aece21767ed895b9bed784"
Mar 20 16:38:05 crc kubenswrapper[4730]: I0320 16:38:05.181880 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567078-bsnqc"
Mar 20 16:38:05 crc kubenswrapper[4730]: I0320 16:38:05.650324 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567072-jzsw2"]
Mar 20 16:38:05 crc kubenswrapper[4730]: I0320 16:38:05.662270 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567072-jzsw2"]
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.603975 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadf5c48-6db4-421c-977d-1216334a9383" path="/var/lib/kubelet/pods/cadf5c48-6db4-421c-977d-1216334a9383/volumes"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.669849 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:07 crc kubenswrapper[4730]: E0320 16:38:07.670290 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" containerName="oc"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.670306 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" containerName="oc"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.670495 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" containerName="oc"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.679938 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.693701 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.781637 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.781743 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcflj\" (UniqueName: \"kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.781891 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.883774 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.883879 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcflj\" (UniqueName: \"kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.884010 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.884377 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.884510 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:07 crc kubenswrapper[4730]: I0320 16:38:07.912545 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcflj\" (UniqueName: \"kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj\") pod \"redhat-operators-2ng72\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") " pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:08 crc kubenswrapper[4730]: I0320 16:38:08.040458 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:08 crc kubenswrapper[4730]: W0320 16:38:08.531973 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e2c22ab_f597_4ec1_b66d_a8b80ddae7ab.slice/crio-2a8b57ef0999e1507449e57f62b329f91fdc4c79646cc432d53ec5cf91fac4d1 WatchSource:0}: Error finding container 2a8b57ef0999e1507449e57f62b329f91fdc4c79646cc432d53ec5cf91fac4d1: Status 404 returned error can't find the container with id 2a8b57ef0999e1507449e57f62b329f91fdc4c79646cc432d53ec5cf91fac4d1
Mar 20 16:38:08 crc kubenswrapper[4730]: I0320 16:38:08.531986 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:09 crc kubenswrapper[4730]: I0320 16:38:09.219528 4730 generic.go:334] "Generic (PLEG): container finished" podID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerID="41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f" exitCode=0
Mar 20 16:38:09 crc kubenswrapper[4730]: I0320 16:38:09.219635 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerDied","Data":"41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f"}
Mar 20 16:38:09 crc kubenswrapper[4730]: I0320 16:38:09.220084 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerStarted","Data":"2a8b57ef0999e1507449e57f62b329f91fdc4c79646cc432d53ec5cf91fac4d1"}
Mar 20 16:38:10 crc kubenswrapper[4730]: I0320 16:38:10.237147 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerStarted","Data":"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"}
Mar 20 16:38:16 crc kubenswrapper[4730]: I0320 16:38:16.295665 4730 generic.go:334] "Generic (PLEG): container finished" podID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerID="fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7" exitCode=0
Mar 20 16:38:16 crc kubenswrapper[4730]: I0320 16:38:16.295749 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerDied","Data":"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"}
Mar 20 16:38:17 crc kubenswrapper[4730]: I0320 16:38:17.308010 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerStarted","Data":"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"}
Mar 20 16:38:17 crc kubenswrapper[4730]: I0320 16:38:17.333679 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ng72" podStartSLOduration=2.8249934740000002 podStartE2EDuration="10.333659933s" podCreationTimestamp="2026-03-20 16:38:07 +0000 UTC" firstStartedPulling="2026-03-20 16:38:09.221403347 +0000 UTC m=+3548.434774716" lastFinishedPulling="2026-03-20 16:38:16.730069796 +0000 UTC m=+3555.943441175" observedRunningTime="2026-03-20 16:38:17.328805106 +0000 UTC m=+3556.542176485" watchObservedRunningTime="2026-03-20 16:38:17.333659933 +0000 UTC m=+3556.547031302"
Mar 20 16:38:18 crc kubenswrapper[4730]: I0320 16:38:18.040684 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:18 crc kubenswrapper[4730]: I0320 16:38:18.041095 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:19 crc kubenswrapper[4730]: I0320 16:38:19.138714 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2ng72" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="registry-server" probeResult="failure" output=<
Mar 20 16:38:19 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s
Mar 20 16:38:19 crc kubenswrapper[4730]: >
Mar 20 16:38:28 crc kubenswrapper[4730]: I0320 16:38:28.099644 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:28 crc kubenswrapper[4730]: I0320 16:38:28.165465 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:28 crc kubenswrapper[4730]: I0320 16:38:28.339117 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.423513 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ng72" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="registry-server" containerID="cri-o://1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f" gracePeriod=2
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.929613 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.942967 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities\") pod \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") "
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.943168 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcflj\" (UniqueName: \"kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj\") pod \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") "
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.943626 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content\") pod \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\" (UID: \"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab\") "
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.943933 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities" (OuterVolumeSpecName: "utilities") pod "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" (UID: "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.952219 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj" (OuterVolumeSpecName: "kube-api-access-mcflj") pod "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" (UID: "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab"). InnerVolumeSpecName "kube-api-access-mcflj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.972427 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:29 crc kubenswrapper[4730]: I0320 16:38:29.972486 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcflj\" (UniqueName: \"kubernetes.io/projected/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-kube-api-access-mcflj\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.107701 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" (UID: "6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.176852 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.438023 4730 generic.go:334] "Generic (PLEG): container finished" podID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerID="1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f" exitCode=0
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.438066 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerDied","Data":"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"}
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.438093 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ng72" event={"ID":"6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab","Type":"ContainerDied","Data":"2a8b57ef0999e1507449e57f62b329f91fdc4c79646cc432d53ec5cf91fac4d1"}
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.438111 4730 scope.go:117] "RemoveContainer" containerID="1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.438121 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ng72"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.462377 4730 scope.go:117] "RemoveContainer" containerID="fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.484155 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.486163 4730 scope.go:117] "RemoveContainer" containerID="41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.493377 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ng72"]
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.528932 4730 scope.go:117] "RemoveContainer" containerID="1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: E0320 16:38:30.529359 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f\": container with ID starting with 1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f not found: ID does not exist" containerID="1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.529422 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f"} err="failed to get container status \"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f\": rpc error: code = NotFound desc = could not find container \"1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f\": container with ID starting with 1b4d30d81ff77e2ae0201a4ec1d98e674f270f1721ed98bd7f2f431853c8df6f not found: ID does not exist"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.529461 4730 scope.go:117] "RemoveContainer" containerID="fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"
Mar 20 16:38:30 crc kubenswrapper[4730]: E0320 16:38:30.529773 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7\": container with ID starting with fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7 not found: ID does not exist" containerID="fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.529822 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7"} err="failed to get container status \"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7\": rpc error: code = NotFound desc = could not find container \"fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7\": container with ID starting with fa67c25c48f3868aa77adc40c5b76a87682169dff1c5d76306550f084ee3a7c7 not found: ID does not exist"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.529857 4730 scope.go:117] "RemoveContainer" containerID="41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: E0320 16:38:30.530152 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f\": container with ID starting with 41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f not found: ID does not exist" containerID="41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f"
Mar 20 16:38:30 crc kubenswrapper[4730]: I0320 16:38:30.530200 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f"} err="failed to get container status \"41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f\": rpc error: code = NotFound desc = could not find container \"41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f\": container with ID starting with 41fb40d724ce9bcdac4ab8d8841562939f9e3bf8f3ffd4e8bc708e5977e01b6f not found: ID does not exist"
Mar 20 16:38:31 crc kubenswrapper[4730]: I0320 16:38:31.544396 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" path="/var/lib/kubelet/pods/6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab/volumes"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.549266 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"]
Mar 20 16:38:33 crc kubenswrapper[4730]: E0320 16:38:33.549963 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="extract-content"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.549977 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="extract-content"
Mar 20 16:38:33 crc kubenswrapper[4730]: E0320 16:38:33.549989 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="registry-server"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.549996 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="registry-server"
Mar 20 16:38:33 crc kubenswrapper[4730]: E0320 16:38:33.550028 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="extract-utilities"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.550035 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="extract-utilities"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.550214 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2c22ab-f597-4ec1-b66d-a8b80ddae7ab" containerName="registry-server"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.554508 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.569177 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"]
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.645837 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.645935 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.646107 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjswk\" (UniqueName: \"kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.749033 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.749199 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjswk\" (UniqueName: \"kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.749307 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.749642 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.749674 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.774138 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjswk\" (UniqueName: \"kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk\") pod \"certified-operators-sfrtb\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") " pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:33 crc kubenswrapper[4730]: I0320 16:38:33.882548 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:34 crc kubenswrapper[4730]: I0320 16:38:34.442382 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"]
Mar 20 16:38:34 crc kubenswrapper[4730]: I0320 16:38:34.476079 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerStarted","Data":"8326f9a3070f09a5c1a65a65c0238d53148283f463d30f8f66946ba3d27b4572"}
Mar 20 16:38:35 crc kubenswrapper[4730]: I0320 16:38:35.489699 4730 generic.go:334] "Generic (PLEG): container finished" podID="68e77c65-6357-459f-9bf2-fe45499cd296" containerID="2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469" exitCode=0
Mar 20 16:38:35 crc kubenswrapper[4730]: I0320 16:38:35.489756 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerDied","Data":"2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469"}
Mar 20 16:38:37 crc kubenswrapper[4730]: I0320 16:38:37.510301 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerStarted","Data":"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36"}
Mar 20 16:38:38 crc kubenswrapper[4730]: I0320 16:38:38.520743 4730 generic.go:334] "Generic (PLEG): container finished" podID="68e77c65-6357-459f-9bf2-fe45499cd296" containerID="d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36" exitCode=0
Mar 20 16:38:38 crc kubenswrapper[4730]: I0320 16:38:38.520798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerDied","Data":"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36"}
Mar 20 16:38:39 crc kubenswrapper[4730]: I0320 16:38:39.543724 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerStarted","Data":"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8"}
Mar 20 16:38:39 crc kubenswrapper[4730]: I0320 16:38:39.554139 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfrtb" podStartSLOduration=3.019910419 podStartE2EDuration="6.554115252s" podCreationTimestamp="2026-03-20 16:38:33 +0000 UTC" firstStartedPulling="2026-03-20 16:38:35.492200618 +0000 UTC m=+3574.705571987" lastFinishedPulling="2026-03-20 16:38:39.026405451 +0000 UTC m=+3578.239776820" observedRunningTime="2026-03-20 16:38:39.549842421 +0000 UTC m=+3578.763213790" watchObservedRunningTime="2026-03-20 16:38:39.554115252 +0000 UTC m=+3578.767486621"
Mar 20 16:38:43 crc kubenswrapper[4730]: I0320 16:38:43.883413 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:43 crc kubenswrapper[4730]: I0320 16:38:43.884016 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:43 crc kubenswrapper[4730]: I0320 16:38:43.929938 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:44 crc kubenswrapper[4730]: I0320 16:38:44.661807 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:44 crc kubenswrapper[4730]: I0320 16:38:44.715911 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"]
Mar 20 16:38:46 crc kubenswrapper[4730]: I0320 16:38:46.627724 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfrtb" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="registry-server" containerID="cri-o://99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8" gracePeriod=2
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.193288 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrtb"
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.245222 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjswk\" (UniqueName: \"kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk\") pod \"68e77c65-6357-459f-9bf2-fe45499cd296\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") "
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.245366 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities\") pod \"68e77c65-6357-459f-9bf2-fe45499cd296\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") "
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.245424 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content\") pod \"68e77c65-6357-459f-9bf2-fe45499cd296\" (UID: \"68e77c65-6357-459f-9bf2-fe45499cd296\") "
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.247744 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities" (OuterVolumeSpecName: "utilities") pod "68e77c65-6357-459f-9bf2-fe45499cd296" (UID: "68e77c65-6357-459f-9bf2-fe45499cd296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.268450 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk" (OuterVolumeSpecName: "kube-api-access-cjswk") pod "68e77c65-6357-459f-9bf2-fe45499cd296" (UID: "68e77c65-6357-459f-9bf2-fe45499cd296"). InnerVolumeSpecName "kube-api-access-cjswk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.346873 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjswk\" (UniqueName: \"kubernetes.io/projected/68e77c65-6357-459f-9bf2-fe45499cd296-kube-api-access-cjswk\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.346935 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.644628 4730 generic.go:334] "Generic (PLEG): container finished" podID="68e77c65-6357-459f-9bf2-fe45499cd296" containerID="99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8" exitCode=0
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.644708 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerDied","Data":"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8"}
Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.644757 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfrtb" 
event={"ID":"68e77c65-6357-459f-9bf2-fe45499cd296","Type":"ContainerDied","Data":"8326f9a3070f09a5c1a65a65c0238d53148283f463d30f8f66946ba3d27b4572"} Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.644787 4730 scope.go:117] "RemoveContainer" containerID="99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.645118 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfrtb" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.668895 4730 scope.go:117] "RemoveContainer" containerID="d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.695044 4730 scope.go:117] "RemoveContainer" containerID="2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.762177 4730 scope.go:117] "RemoveContainer" containerID="99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8" Mar 20 16:38:47 crc kubenswrapper[4730]: E0320 16:38:47.762605 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8\": container with ID starting with 99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8 not found: ID does not exist" containerID="99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.762636 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8"} err="failed to get container status \"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8\": rpc error: code = NotFound desc = could not find container \"99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8\": 
container with ID starting with 99b4b38b434690826e45d27a5d067e4e0e256417cb7003c55df7e43f31c807a8 not found: ID does not exist" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.762658 4730 scope.go:117] "RemoveContainer" containerID="d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36" Mar 20 16:38:47 crc kubenswrapper[4730]: E0320 16:38:47.763067 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36\": container with ID starting with d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36 not found: ID does not exist" containerID="d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.763090 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36"} err="failed to get container status \"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36\": rpc error: code = NotFound desc = could not find container \"d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36\": container with ID starting with d569825b7078b64e8055d1b820560032b6386e4536c814bef30f2c5e7dab2c36 not found: ID does not exist" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.763107 4730 scope.go:117] "RemoveContainer" containerID="2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469" Mar 20 16:38:47 crc kubenswrapper[4730]: E0320 16:38:47.763498 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469\": container with ID starting with 2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469 not found: ID does not exist" 
containerID="2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.763544 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469"} err="failed to get container status \"2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469\": rpc error: code = NotFound desc = could not find container \"2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469\": container with ID starting with 2c0196867229753a5d5133a1c1a6d4ae5892cbd2d676b231f509ca4edf696469 not found: ID does not exist" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.825965 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68e77c65-6357-459f-9bf2-fe45499cd296" (UID: "68e77c65-6357-459f-9bf2-fe45499cd296"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.853225 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68e77c65-6357-459f-9bf2-fe45499cd296-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:38:47 crc kubenswrapper[4730]: I0320 16:38:47.987740 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"] Mar 20 16:38:48 crc kubenswrapper[4730]: I0320 16:38:48.001033 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfrtb"] Mar 20 16:38:49 crc kubenswrapper[4730]: I0320 16:38:49.550544 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" path="/var/lib/kubelet/pods/68e77c65-6357-459f-9bf2-fe45499cd296/volumes" Mar 20 16:38:50 crc kubenswrapper[4730]: I0320 16:38:50.785180 4730 scope.go:117] "RemoveContainer" containerID="e43508074aa0c7c7c61cb53a8852f8061943211007b9394c89ac6a8a6c904123" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.161513 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567080-445j8"] Mar 20 16:40:00 crc kubenswrapper[4730]: E0320 16:40:00.162441 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="extract-content" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.162455 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="extract-content" Mar 20 16:40:00 crc kubenswrapper[4730]: E0320 16:40:00.162475 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="registry-server" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.162481 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="registry-server" Mar 20 16:40:00 crc kubenswrapper[4730]: E0320 16:40:00.162529 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="extract-utilities" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.162536 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="extract-utilities" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.162719 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e77c65-6357-459f-9bf2-fe45499cd296" containerName="registry-server" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.163402 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.166566 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.166604 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.166958 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.171402 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567080-445j8"] Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.280387 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgq9g\" (UniqueName: \"kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g\") pod \"auto-csr-approver-29567080-445j8\" (UID: \"9b8e3816-7082-4505-847c-880b40d33930\") " 
pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.382088 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgq9g\" (UniqueName: \"kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g\") pod \"auto-csr-approver-29567080-445j8\" (UID: \"9b8e3816-7082-4505-847c-880b40d33930\") " pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.406836 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgq9g\" (UniqueName: \"kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g\") pod \"auto-csr-approver-29567080-445j8\" (UID: \"9b8e3816-7082-4505-847c-880b40d33930\") " pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.484624 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.977451 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567080-445j8"] Mar 20 16:40:00 crc kubenswrapper[4730]: I0320 16:40:00.977979 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:40:01 crc kubenswrapper[4730]: I0320 16:40:01.311610 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567080-445j8" event={"ID":"9b8e3816-7082-4505-847c-880b40d33930","Type":"ContainerStarted","Data":"3936d873f1da84ce2f7ff062b900f2b0100ea3cfeea943d681e1395f94bb332b"} Mar 20 16:40:02 crc kubenswrapper[4730]: I0320 16:40:02.321358 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567080-445j8" 
event={"ID":"9b8e3816-7082-4505-847c-880b40d33930","Type":"ContainerStarted","Data":"86dd9cb2df6336d37948551b33d5a151e10e12f60435fec8a924c6900e110929"} Mar 20 16:40:02 crc kubenswrapper[4730]: I0320 16:40:02.338294 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567080-445j8" podStartSLOduration=1.3194325980000001 podStartE2EDuration="2.338273392s" podCreationTimestamp="2026-03-20 16:40:00 +0000 UTC" firstStartedPulling="2026-03-20 16:40:00.977771747 +0000 UTC m=+3660.191143116" lastFinishedPulling="2026-03-20 16:40:01.996612541 +0000 UTC m=+3661.209983910" observedRunningTime="2026-03-20 16:40:02.335289198 +0000 UTC m=+3661.548660587" watchObservedRunningTime="2026-03-20 16:40:02.338273392 +0000 UTC m=+3661.551644761" Mar 20 16:40:03 crc kubenswrapper[4730]: I0320 16:40:03.330746 4730 generic.go:334] "Generic (PLEG): container finished" podID="9b8e3816-7082-4505-847c-880b40d33930" containerID="86dd9cb2df6336d37948551b33d5a151e10e12f60435fec8a924c6900e110929" exitCode=0 Mar 20 16:40:03 crc kubenswrapper[4730]: I0320 16:40:03.330848 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567080-445j8" event={"ID":"9b8e3816-7082-4505-847c-880b40d33930","Type":"ContainerDied","Data":"86dd9cb2df6336d37948551b33d5a151e10e12f60435fec8a924c6900e110929"} Mar 20 16:40:04 crc kubenswrapper[4730]: I0320 16:40:04.718401 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:04 crc kubenswrapper[4730]: I0320 16:40:04.790060 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgq9g\" (UniqueName: \"kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g\") pod \"9b8e3816-7082-4505-847c-880b40d33930\" (UID: \"9b8e3816-7082-4505-847c-880b40d33930\") " Mar 20 16:40:04 crc kubenswrapper[4730]: I0320 16:40:04.798766 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g" (OuterVolumeSpecName: "kube-api-access-vgq9g") pod "9b8e3816-7082-4505-847c-880b40d33930" (UID: "9b8e3816-7082-4505-847c-880b40d33930"). InnerVolumeSpecName "kube-api-access-vgq9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:40:04 crc kubenswrapper[4730]: I0320 16:40:04.894033 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgq9g\" (UniqueName: \"kubernetes.io/projected/9b8e3816-7082-4505-847c-880b40d33930-kube-api-access-vgq9g\") on node \"crc\" DevicePath \"\"" Mar 20 16:40:05 crc kubenswrapper[4730]: I0320 16:40:05.350828 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567080-445j8" event={"ID":"9b8e3816-7082-4505-847c-880b40d33930","Type":"ContainerDied","Data":"3936d873f1da84ce2f7ff062b900f2b0100ea3cfeea943d681e1395f94bb332b"} Mar 20 16:40:05 crc kubenswrapper[4730]: I0320 16:40:05.350863 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3936d873f1da84ce2f7ff062b900f2b0100ea3cfeea943d681e1395f94bb332b" Mar 20 16:40:05 crc kubenswrapper[4730]: I0320 16:40:05.350887 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567080-445j8" Mar 20 16:40:05 crc kubenswrapper[4730]: I0320 16:40:05.788200 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567074-2xrgq"] Mar 20 16:40:05 crc kubenswrapper[4730]: I0320 16:40:05.801652 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567074-2xrgq"] Mar 20 16:40:07 crc kubenswrapper[4730]: I0320 16:40:07.542732 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a649324-b73a-44e0-94e5-2b8c54476367" path="/var/lib/kubelet/pods/2a649324-b73a-44e0-94e5-2b8c54476367/volumes" Mar 20 16:40:12 crc kubenswrapper[4730]: I0320 16:40:12.879786 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:40:12 crc kubenswrapper[4730]: I0320 16:40:12.880457 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:40:42 crc kubenswrapper[4730]: I0320 16:40:42.880376 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:40:42 crc kubenswrapper[4730]: I0320 16:40:42.881943 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:40:50 crc kubenswrapper[4730]: I0320 16:40:50.919497 4730 scope.go:117] "RemoveContainer" containerID="2bb1a712fbfcbaa124ee788c9be392cdb5ddacf514828a35ba09574bc19839a4" Mar 20 16:41:12 crc kubenswrapper[4730]: I0320 16:41:12.880244 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:41:12 crc kubenswrapper[4730]: I0320 16:41:12.880744 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:41:12 crc kubenswrapper[4730]: I0320 16:41:12.880797 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:41:12 crc kubenswrapper[4730]: I0320 16:41:12.881720 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:41:12 crc kubenswrapper[4730]: I0320 16:41:12.881793 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6" gracePeriod=600 Mar 20 16:41:14 crc kubenswrapper[4730]: I0320 16:41:14.024275 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6" exitCode=0 Mar 20 16:41:14 crc kubenswrapper[4730]: I0320 16:41:14.024328 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6"} Mar 20 16:41:14 crc kubenswrapper[4730]: I0320 16:41:14.024930 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"} Mar 20 16:41:14 crc kubenswrapper[4730]: I0320 16:41:14.024959 4730 scope.go:117] "RemoveContainer" containerID="0b0c904fd86d0d3236cc303102f0b32669f34531125a255be5df201b0dd8ef34" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.149995 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567082-597m4"] Mar 20 16:42:00 crc kubenswrapper[4730]: E0320 16:42:00.151068 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e3816-7082-4505-847c-880b40d33930" containerName="oc" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.151085 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e3816-7082-4505-847c-880b40d33930" containerName="oc" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.151377 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9b8e3816-7082-4505-847c-880b40d33930" containerName="oc" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.152324 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.156098 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.156104 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.156914 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.160234 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567082-597m4"] Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.225924 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjsc8\" (UniqueName: \"kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8\") pod \"auto-csr-approver-29567082-597m4\" (UID: \"713df8c0-cae2-4cfd-9ecf-66856a78c066\") " pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.327657 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjsc8\" (UniqueName: \"kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8\") pod \"auto-csr-approver-29567082-597m4\" (UID: \"713df8c0-cae2-4cfd-9ecf-66856a78c066\") " pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.352808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjsc8\" (UniqueName: 
\"kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8\") pod \"auto-csr-approver-29567082-597m4\" (UID: \"713df8c0-cae2-4cfd-9ecf-66856a78c066\") " pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.473898 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:00 crc kubenswrapper[4730]: I0320 16:42:00.952823 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567082-597m4"] Mar 20 16:42:01 crc kubenswrapper[4730]: I0320 16:42:01.464082 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567082-597m4" event={"ID":"713df8c0-cae2-4cfd-9ecf-66856a78c066","Type":"ContainerStarted","Data":"dbd6b9fa7b06d76ffe7fc201a8e7d674e8d261c5618fc8a84d4c08a89cda9426"} Mar 20 16:42:02 crc kubenswrapper[4730]: I0320 16:42:02.477137 4730 generic.go:334] "Generic (PLEG): container finished" podID="713df8c0-cae2-4cfd-9ecf-66856a78c066" containerID="44f54f3fc7434586ebe0f8d3b305da77181f85e1a148a72f852acf4f69b33aae" exitCode=0 Mar 20 16:42:02 crc kubenswrapper[4730]: I0320 16:42:02.477261 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567082-597m4" event={"ID":"713df8c0-cae2-4cfd-9ecf-66856a78c066","Type":"ContainerDied","Data":"44f54f3fc7434586ebe0f8d3b305da77181f85e1a148a72f852acf4f69b33aae"} Mar 20 16:42:03 crc kubenswrapper[4730]: I0320 16:42:03.827795 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:03 crc kubenswrapper[4730]: I0320 16:42:03.934143 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjsc8\" (UniqueName: \"kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8\") pod \"713df8c0-cae2-4cfd-9ecf-66856a78c066\" (UID: \"713df8c0-cae2-4cfd-9ecf-66856a78c066\") " Mar 20 16:42:03 crc kubenswrapper[4730]: I0320 16:42:03.943442 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8" (OuterVolumeSpecName: "kube-api-access-mjsc8") pod "713df8c0-cae2-4cfd-9ecf-66856a78c066" (UID: "713df8c0-cae2-4cfd-9ecf-66856a78c066"). InnerVolumeSpecName "kube-api-access-mjsc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.037954 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjsc8\" (UniqueName: \"kubernetes.io/projected/713df8c0-cae2-4cfd-9ecf-66856a78c066-kube-api-access-mjsc8\") on node \"crc\" DevicePath \"\"" Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.494685 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567082-597m4" event={"ID":"713df8c0-cae2-4cfd-9ecf-66856a78c066","Type":"ContainerDied","Data":"dbd6b9fa7b06d76ffe7fc201a8e7d674e8d261c5618fc8a84d4c08a89cda9426"} Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.495039 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd6b9fa7b06d76ffe7fc201a8e7d674e8d261c5618fc8a84d4c08a89cda9426" Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.495139 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567082-597m4" Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.899388 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567076-5zjgq"] Mar 20 16:42:04 crc kubenswrapper[4730]: I0320 16:42:04.909455 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567076-5zjgq"] Mar 20 16:42:05 crc kubenswrapper[4730]: I0320 16:42:05.550252 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb7d22d-4ad0-4a9c-bf00-e966d1abb051" path="/var/lib/kubelet/pods/bcb7d22d-4ad0-4a9c-bf00-e966d1abb051/volumes" Mar 20 16:42:51 crc kubenswrapper[4730]: I0320 16:42:51.012095 4730 scope.go:117] "RemoveContainer" containerID="7abe93567f97d011f8ae053e88185c6004136b63e3d5f72b19beb707014bf434" Mar 20 16:43:42 crc kubenswrapper[4730]: I0320 16:43:42.880002 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:43:42 crc kubenswrapper[4730]: I0320 16:43:42.880655 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.164491 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567084-c4mpd"] Mar 20 16:44:00 crc kubenswrapper[4730]: E0320 16:44:00.165799 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713df8c0-cae2-4cfd-9ecf-66856a78c066" containerName="oc" Mar 20 16:44:00 crc 
kubenswrapper[4730]: I0320 16:44:00.165820 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="713df8c0-cae2-4cfd-9ecf-66856a78c066" containerName="oc" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.166168 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="713df8c0-cae2-4cfd-9ecf-66856a78c066" containerName="oc" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.167543 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.172203 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.175682 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.176072 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.180854 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567084-c4mpd"] Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.303360 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sr4x\" (UniqueName: \"kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x\") pod \"auto-csr-approver-29567084-c4mpd\" (UID: \"51901671-4d27-46c5-9a9d-baf51b2b9c01\") " pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.405576 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sr4x\" (UniqueName: \"kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x\") pod \"auto-csr-approver-29567084-c4mpd\" 
(UID: \"51901671-4d27-46c5-9a9d-baf51b2b9c01\") " pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:00 crc kubenswrapper[4730]: I0320 16:44:00.901304 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sr4x\" (UniqueName: \"kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x\") pod \"auto-csr-approver-29567084-c4mpd\" (UID: \"51901671-4d27-46c5-9a9d-baf51b2b9c01\") " pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:01 crc kubenswrapper[4730]: I0320 16:44:01.093481 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:01 crc kubenswrapper[4730]: I0320 16:44:01.658512 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567084-c4mpd"] Mar 20 16:44:01 crc kubenswrapper[4730]: I0320 16:44:01.729903 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" event={"ID":"51901671-4d27-46c5-9a9d-baf51b2b9c01","Type":"ContainerStarted","Data":"84df18edf9b53088b22d7c63f76d7a13c8c7254f5c750e4137ce33ff56783214"} Mar 20 16:44:03 crc kubenswrapper[4730]: I0320 16:44:03.750911 4730 generic.go:334] "Generic (PLEG): container finished" podID="51901671-4d27-46c5-9a9d-baf51b2b9c01" containerID="cbdd90e3d11772056ef45ec365a19533f01de2f8c0583c5498c86a843612b56d" exitCode=0 Mar 20 16:44:03 crc kubenswrapper[4730]: I0320 16:44:03.750967 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" event={"ID":"51901671-4d27-46c5-9a9d-baf51b2b9c01","Type":"ContainerDied","Data":"cbdd90e3d11772056ef45ec365a19533f01de2f8c0583c5498c86a843612b56d"} Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.154887 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.308896 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sr4x\" (UniqueName: \"kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x\") pod \"51901671-4d27-46c5-9a9d-baf51b2b9c01\" (UID: \"51901671-4d27-46c5-9a9d-baf51b2b9c01\") " Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.324596 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x" (OuterVolumeSpecName: "kube-api-access-6sr4x") pod "51901671-4d27-46c5-9a9d-baf51b2b9c01" (UID: "51901671-4d27-46c5-9a9d-baf51b2b9c01"). InnerVolumeSpecName "kube-api-access-6sr4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.412078 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sr4x\" (UniqueName: \"kubernetes.io/projected/51901671-4d27-46c5-9a9d-baf51b2b9c01-kube-api-access-6sr4x\") on node \"crc\" DevicePath \"\"" Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.779937 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" event={"ID":"51901671-4d27-46c5-9a9d-baf51b2b9c01","Type":"ContainerDied","Data":"84df18edf9b53088b22d7c63f76d7a13c8c7254f5c750e4137ce33ff56783214"} Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.780274 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84df18edf9b53088b22d7c63f76d7a13c8c7254f5c750e4137ce33ff56783214" Mar 20 16:44:05 crc kubenswrapper[4730]: I0320 16:44:05.780019 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567084-c4mpd" Mar 20 16:44:06 crc kubenswrapper[4730]: I0320 16:44:06.239313 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567078-bsnqc"] Mar 20 16:44:06 crc kubenswrapper[4730]: I0320 16:44:06.252901 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567078-bsnqc"] Mar 20 16:44:07 crc kubenswrapper[4730]: I0320 16:44:07.548571 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7178ffb9-4891-485e-b3d6-d7fcc8f22ef4" path="/var/lib/kubelet/pods/7178ffb9-4891-485e-b3d6-d7fcc8f22ef4/volumes" Mar 20 16:44:12 crc kubenswrapper[4730]: I0320 16:44:12.879815 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:44:12 crc kubenswrapper[4730]: I0320 16:44:12.880275 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:44:42 crc kubenswrapper[4730]: I0320 16:44:42.879740 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:44:42 crc kubenswrapper[4730]: I0320 16:44:42.880275 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:44:42 crc kubenswrapper[4730]: I0320 16:44:42.880326 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 16:44:42 crc kubenswrapper[4730]: I0320 16:44:42.881278 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:44:42 crc kubenswrapper[4730]: I0320 16:44:42.881345 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" gracePeriod=600 Mar 20 16:44:43 crc kubenswrapper[4730]: E0320 16:44:43.002846 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:44:43 crc kubenswrapper[4730]: I0320 16:44:43.178232 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" exitCode=0 Mar 20 
16:44:43 crc kubenswrapper[4730]: I0320 16:44:43.178293 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"} Mar 20 16:44:43 crc kubenswrapper[4730]: I0320 16:44:43.178772 4730 scope.go:117] "RemoveContainer" containerID="941dc6e58516657b43df3ac7120bf6da060d2f3b3a6c41da62694a0c2e80f6c6" Mar 20 16:44:43 crc kubenswrapper[4730]: I0320 16:44:43.179993 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:44:43 crc kubenswrapper[4730]: E0320 16:44:43.180643 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:44:51 crc kubenswrapper[4730]: I0320 16:44:51.115117 4730 scope.go:117] "RemoveContainer" containerID="3b2c3b7b49826d995d5d767d6f87b9a288d68f48a6bee9638eee715a068be2d7" Mar 20 16:44:55 crc kubenswrapper[4730]: I0320 16:44:55.534095 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:44:55 crc kubenswrapper[4730]: E0320 16:44:55.535592 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.166880 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4"] Mar 20 16:45:00 crc kubenswrapper[4730]: E0320 16:45:00.168199 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51901671-4d27-46c5-9a9d-baf51b2b9c01" containerName="oc" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.168222 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="51901671-4d27-46c5-9a9d-baf51b2b9c01" containerName="oc" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.168598 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="51901671-4d27-46c5-9a9d-baf51b2b9c01" containerName="oc" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.169718 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.171897 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.182687 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.202645 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4"] Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.287778 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.287869 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzhq\" (UniqueName: \"kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.288384 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.392156 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.392278 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzhq\" (UniqueName: \"kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.392419 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.393698 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.410695 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.412500 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzhq\" (UniqueName: \"kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq\") pod \"collect-profiles-29567085-cglw4\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:00 crc kubenswrapper[4730]: I0320 16:45:00.508181 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:01 crc kubenswrapper[4730]: I0320 16:45:01.015649 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4"] Mar 20 16:45:01 crc kubenswrapper[4730]: I0320 16:45:01.364122 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" event={"ID":"1c630452-a358-4849-b036-b8cdeb19775f","Type":"ContainerStarted","Data":"2f6293be6ccbad3a3176b9b41c9c1955e0305b0804646ead1e4d478ea3563234"} Mar 20 16:45:01 crc kubenswrapper[4730]: I0320 16:45:01.364179 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" event={"ID":"1c630452-a358-4849-b036-b8cdeb19775f","Type":"ContainerStarted","Data":"74ea47e100eaace17e602b1d49e86e12d719d06c6e1e869935b40a8d6e8d1499"} Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.375770 4730 generic.go:334] "Generic (PLEG): container finished" podID="1c630452-a358-4849-b036-b8cdeb19775f" containerID="2f6293be6ccbad3a3176b9b41c9c1955e0305b0804646ead1e4d478ea3563234" exitCode=0 Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.375911 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" event={"ID":"1c630452-a358-4849-b036-b8cdeb19775f","Type":"ContainerDied","Data":"2f6293be6ccbad3a3176b9b41c9c1955e0305b0804646ead1e4d478ea3563234"} Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.809521 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.949285 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume\") pod \"1c630452-a358-4849-b036-b8cdeb19775f\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.949350 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume\") pod \"1c630452-a358-4849-b036-b8cdeb19775f\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.949429 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rzhq\" (UniqueName: \"kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq\") pod \"1c630452-a358-4849-b036-b8cdeb19775f\" (UID: \"1c630452-a358-4849-b036-b8cdeb19775f\") " Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.950096 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c630452-a358-4849-b036-b8cdeb19775f" (UID: "1c630452-a358-4849-b036-b8cdeb19775f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.955332 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c630452-a358-4849-b036-b8cdeb19775f" (UID: "1c630452-a358-4849-b036-b8cdeb19775f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 16:45:02 crc kubenswrapper[4730]: I0320 16:45:02.955555 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq" (OuterVolumeSpecName: "kube-api-access-8rzhq") pod "1c630452-a358-4849-b036-b8cdeb19775f" (UID: "1c630452-a358-4849-b036-b8cdeb19775f"). InnerVolumeSpecName "kube-api-access-8rzhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.052630 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c630452-a358-4849-b036-b8cdeb19775f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.052678 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rzhq\" (UniqueName: \"kubernetes.io/projected/1c630452-a358-4849-b036-b8cdeb19775f-kube-api-access-8rzhq\") on node \"crc\" DevicePath \"\"" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.052691 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c630452-a358-4849-b036-b8cdeb19775f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.389850 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" event={"ID":"1c630452-a358-4849-b036-b8cdeb19775f","Type":"ContainerDied","Data":"74ea47e100eaace17e602b1d49e86e12d719d06c6e1e869935b40a8d6e8d1499"} Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.389893 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ea47e100eaace17e602b1d49e86e12d719d06c6e1e869935b40a8d6e8d1499" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.389898 4730 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567085-cglw4" Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.912628 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"] Mar 20 16:45:03 crc kubenswrapper[4730]: I0320 16:45:03.925207 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567040-cz69b"] Mar 20 16:45:05 crc kubenswrapper[4730]: I0320 16:45:05.547062 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="672cfda1-2ec8-41fe-b3dc-eabe4e60726d" path="/var/lib/kubelet/pods/672cfda1-2ec8-41fe-b3dc-eabe4e60726d/volumes" Mar 20 16:45:07 crc kubenswrapper[4730]: I0320 16:45:07.533018 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:45:07 crc kubenswrapper[4730]: E0320 16:45:07.533787 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:19 crc kubenswrapper[4730]: I0320 16:45:19.533627 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:45:19 crc kubenswrapper[4730]: E0320 16:45:19.535122 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:32 crc kubenswrapper[4730]: I0320 16:45:32.532738 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:45:32 crc kubenswrapper[4730]: E0320 16:45:32.533549 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:45 crc kubenswrapper[4730]: I0320 16:45:45.533522 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:45:45 crc kubenswrapper[4730]: E0320 16:45:45.535331 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:51 crc kubenswrapper[4730]: I0320 16:45:51.208907 4730 scope.go:117] "RemoveContainer" containerID="aa12014b37ee0e01204777f8c797059805894b107ea52ba01e8a5d24299b55a5" Mar 20 16:45:57 crc kubenswrapper[4730]: I0320 16:45:57.537391 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:45:57 crc kubenswrapper[4730]: E0320 16:45:57.538239 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.907667 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:45:58 crc kubenswrapper[4730]: E0320 16:45:58.908377 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c630452-a358-4849-b036-b8cdeb19775f" containerName="collect-profiles" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.908389 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c630452-a358-4849-b036-b8cdeb19775f" containerName="collect-profiles" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.908611 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c630452-a358-4849-b036-b8cdeb19775f" containerName="collect-profiles" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.910008 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.924717 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.997537 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.997728 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s9p2\" (UniqueName: \"kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:58 crc kubenswrapper[4730]: I0320 16:45:58.997930 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.100527 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.100789 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.100851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s9p2\" (UniqueName: \"kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.101586 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.102413 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.125568 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s9p2\" (UniqueName: \"kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2\") pod \"redhat-marketplace-64spl\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.237104 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:45:59 crc kubenswrapper[4730]: I0320 16:45:59.712208 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:45:59 crc kubenswrapper[4730]: W0320 16:45:59.715216 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c95ee43_d4bb_471d_977a_4cb14f99a03e.slice/crio-959b6f691ded28673442a436f57cb1d08ee3d5669b036068fbb5bb7dcb1750ab WatchSource:0}: Error finding container 959b6f691ded28673442a436f57cb1d08ee3d5669b036068fbb5bb7dcb1750ab: Status 404 returned error can't find the container with id 959b6f691ded28673442a436f57cb1d08ee3d5669b036068fbb5bb7dcb1750ab Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.148204 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567086-bgtzj"] Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.150443 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.156072 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.156103 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.156145 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.159219 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567086-bgtzj"] Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.176905 4730 generic.go:334] "Generic (PLEG): container finished" podID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerID="bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b" exitCode=0 Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.176974 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerDied","Data":"bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b"} Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.177070 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerStarted","Data":"959b6f691ded28673442a436f57cb1d08ee3d5669b036068fbb5bb7dcb1750ab"} Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.178716 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.223753 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-8pw49\" (UniqueName: \"kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49\") pod \"auto-csr-approver-29567086-bgtzj\" (UID: \"1598084b-3967-4d5d-8911-87b4bbf10965\") " pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.325735 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pw49\" (UniqueName: \"kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49\") pod \"auto-csr-approver-29567086-bgtzj\" (UID: \"1598084b-3967-4d5d-8911-87b4bbf10965\") " pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.346422 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pw49\" (UniqueName: \"kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49\") pod \"auto-csr-approver-29567086-bgtzj\" (UID: \"1598084b-3967-4d5d-8911-87b4bbf10965\") " pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.472683 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:00 crc kubenswrapper[4730]: I0320 16:46:00.932497 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567086-bgtzj"] Mar 20 16:46:00 crc kubenswrapper[4730]: W0320 16:46:00.955781 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1598084b_3967_4d5d_8911_87b4bbf10965.slice/crio-6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd WatchSource:0}: Error finding container 6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd: Status 404 returned error can't find the container with id 6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd Mar 20 16:46:01 crc kubenswrapper[4730]: I0320 16:46:01.189598 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerStarted","Data":"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840"} Mar 20 16:46:01 crc kubenswrapper[4730]: I0320 16:46:01.192916 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" event={"ID":"1598084b-3967-4d5d-8911-87b4bbf10965","Type":"ContainerStarted","Data":"6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd"} Mar 20 16:46:03 crc kubenswrapper[4730]: I0320 16:46:03.211684 4730 generic.go:334] "Generic (PLEG): container finished" podID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerID="c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840" exitCode=0 Mar 20 16:46:03 crc kubenswrapper[4730]: I0320 16:46:03.211727 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" 
event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerDied","Data":"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840"} Mar 20 16:46:03 crc kubenswrapper[4730]: I0320 16:46:03.214109 4730 generic.go:334] "Generic (PLEG): container finished" podID="1598084b-3967-4d5d-8911-87b4bbf10965" containerID="090324d1acddd1e29456802f46699a7cfabedaef8f848dbdf774851d4687bf7f" exitCode=0 Mar 20 16:46:03 crc kubenswrapper[4730]: I0320 16:46:03.214132 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" event={"ID":"1598084b-3967-4d5d-8911-87b4bbf10965","Type":"ContainerDied","Data":"090324d1acddd1e29456802f46699a7cfabedaef8f848dbdf774851d4687bf7f"} Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.228619 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerStarted","Data":"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088"} Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.260734 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-64spl" podStartSLOduration=2.859321607 podStartE2EDuration="6.260716992s" podCreationTimestamp="2026-03-20 16:45:58 +0000 UTC" firstStartedPulling="2026-03-20 16:46:00.178512269 +0000 UTC m=+4019.391883638" lastFinishedPulling="2026-03-20 16:46:03.579907654 +0000 UTC m=+4022.793279023" observedRunningTime="2026-03-20 16:46:04.251390157 +0000 UTC m=+4023.464761516" watchObservedRunningTime="2026-03-20 16:46:04.260716992 +0000 UTC m=+4023.474088361" Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.578013 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.616267 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pw49\" (UniqueName: \"kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49\") pod \"1598084b-3967-4d5d-8911-87b4bbf10965\" (UID: \"1598084b-3967-4d5d-8911-87b4bbf10965\") " Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.622796 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49" (OuterVolumeSpecName: "kube-api-access-8pw49") pod "1598084b-3967-4d5d-8911-87b4bbf10965" (UID: "1598084b-3967-4d5d-8911-87b4bbf10965"). InnerVolumeSpecName "kube-api-access-8pw49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:46:04 crc kubenswrapper[4730]: I0320 16:46:04.718963 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pw49\" (UniqueName: \"kubernetes.io/projected/1598084b-3967-4d5d-8911-87b4bbf10965-kube-api-access-8pw49\") on node \"crc\" DevicePath \"\"" Mar 20 16:46:05 crc kubenswrapper[4730]: I0320 16:46:05.243201 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" event={"ID":"1598084b-3967-4d5d-8911-87b4bbf10965","Type":"ContainerDied","Data":"6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd"} Mar 20 16:46:05 crc kubenswrapper[4730]: I0320 16:46:05.243272 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6109ce38a6660db0e0a833021a4b531135042d1a4f46fc4d4a324f3c95359edd" Mar 20 16:46:05 crc kubenswrapper[4730]: I0320 16:46:05.243338 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567086-bgtzj" Mar 20 16:46:05 crc kubenswrapper[4730]: I0320 16:46:05.671829 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567080-445j8"] Mar 20 16:46:05 crc kubenswrapper[4730]: I0320 16:46:05.692293 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567080-445j8"] Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.397409 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgqdz"] Mar 20 16:46:07 crc kubenswrapper[4730]: E0320 16:46:07.398466 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1598084b-3967-4d5d-8911-87b4bbf10965" containerName="oc" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.398490 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1598084b-3967-4d5d-8911-87b4bbf10965" containerName="oc" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.398891 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1598084b-3967-4d5d-8911-87b4bbf10965" containerName="oc" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.401498 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.406187 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgqdz"] Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.472736 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfng9\" (UniqueName: \"kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.472844 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.472909 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.546571 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8e3816-7082-4505-847c-880b40d33930" path="/var/lib/kubelet/pods/9b8e3816-7082-4505-847c-880b40d33930/volumes" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.574786 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfng9\" (UniqueName: 
\"kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.575209 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.575446 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.576566 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.576610 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.602215 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfng9\" (UniqueName: 
\"kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9\") pod \"community-operators-qgqdz\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:07 crc kubenswrapper[4730]: I0320 16:46:07.744448 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:08 crc kubenswrapper[4730]: W0320 16:46:08.242328 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64b3980f_0b4c_4751_9152_a70ce47eca6a.slice/crio-007c95ca65feb4b29413d4a25914d3478366cc965bcb7c6e91b5caf3907ff6e4 WatchSource:0}: Error finding container 007c95ca65feb4b29413d4a25914d3478366cc965bcb7c6e91b5caf3907ff6e4: Status 404 returned error can't find the container with id 007c95ca65feb4b29413d4a25914d3478366cc965bcb7c6e91b5caf3907ff6e4 Mar 20 16:46:08 crc kubenswrapper[4730]: I0320 16:46:08.263001 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgqdz"] Mar 20 16:46:08 crc kubenswrapper[4730]: I0320 16:46:08.274640 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerStarted","Data":"007c95ca65feb4b29413d4a25914d3478366cc965bcb7c6e91b5caf3907ff6e4"} Mar 20 16:46:08 crc kubenswrapper[4730]: I0320 16:46:08.533552 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:46:08 crc kubenswrapper[4730]: E0320 16:46:08.534053 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.238733 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.239071 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.287293 4730 generic.go:334] "Generic (PLEG): container finished" podID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerID="086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02" exitCode=0 Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.287356 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerDied","Data":"086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02"} Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.300584 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:09 crc kubenswrapper[4730]: I0320 16:46:09.353232 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:11 crc kubenswrapper[4730]: I0320 16:46:11.313507 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerStarted","Data":"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"} Mar 20 16:46:11 crc kubenswrapper[4730]: I0320 16:46:11.565136 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:46:11 crc kubenswrapper[4730]: I0320 16:46:11.565385 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-64spl" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="registry-server" containerID="cri-o://08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088" gracePeriod=2 Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.172878 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.325975 4730 generic.go:334] "Generic (PLEG): container finished" podID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerID="08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088" exitCode=0 Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.326019 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerDied","Data":"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088"} Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.326065 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64spl" event={"ID":"3c95ee43-d4bb-471d-977a-4cb14f99a03e","Type":"ContainerDied","Data":"959b6f691ded28673442a436f57cb1d08ee3d5669b036068fbb5bb7dcb1750ab"} Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.326071 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64spl" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.326090 4730 scope.go:117] "RemoveContainer" containerID="08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.333160 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities\") pod \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.333264 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s9p2\" (UniqueName: \"kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2\") pod \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.333400 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content\") pod \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\" (UID: \"3c95ee43-d4bb-471d-977a-4cb14f99a03e\") " Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.339759 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities" (OuterVolumeSpecName: "utilities") pod "3c95ee43-d4bb-471d-977a-4cb14f99a03e" (UID: "3c95ee43-d4bb-471d-977a-4cb14f99a03e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.346726 4730 scope.go:117] "RemoveContainer" containerID="c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.347493 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2" (OuterVolumeSpecName: "kube-api-access-9s9p2") pod "3c95ee43-d4bb-471d-977a-4cb14f99a03e" (UID: "3c95ee43-d4bb-471d-977a-4cb14f99a03e"). InnerVolumeSpecName "kube-api-access-9s9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.389359 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c95ee43-d4bb-471d-977a-4cb14f99a03e" (UID: "3c95ee43-d4bb-471d-977a-4cb14f99a03e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.431096 4730 scope.go:117] "RemoveContainer" containerID="bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.437010 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.437044 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s9p2\" (UniqueName: \"kubernetes.io/projected/3c95ee43-d4bb-471d-977a-4cb14f99a03e-kube-api-access-9s9p2\") on node \"crc\" DevicePath \"\"" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.437081 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95ee43-d4bb-471d-977a-4cb14f99a03e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.523847 4730 scope.go:117] "RemoveContainer" containerID="08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088" Mar 20 16:46:12 crc kubenswrapper[4730]: E0320 16:46:12.524715 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088\": container with ID starting with 08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088 not found: ID does not exist" containerID="08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.524770 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088"} err="failed to get container status 
\"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088\": rpc error: code = NotFound desc = could not find container \"08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088\": container with ID starting with 08a3a7a763d29be2ad327ffc0a70e98c7747477d348e5faafbeb00384d3d3088 not found: ID does not exist" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.524796 4730 scope.go:117] "RemoveContainer" containerID="c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840" Mar 20 16:46:12 crc kubenswrapper[4730]: E0320 16:46:12.525306 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840\": container with ID starting with c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840 not found: ID does not exist" containerID="c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.525345 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840"} err="failed to get container status \"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840\": rpc error: code = NotFound desc = could not find container \"c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840\": container with ID starting with c380a914b21ad400e0c8ebd69b48a7f3bfef4e58619cb65f94f2903eccb23840 not found: ID does not exist" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.525361 4730 scope.go:117] "RemoveContainer" containerID="bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b" Mar 20 16:46:12 crc kubenswrapper[4730]: E0320 16:46:12.525874 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b\": container with ID starting with bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b not found: ID does not exist" containerID="bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.525921 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b"} err="failed to get container status \"bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b\": rpc error: code = NotFound desc = could not find container \"bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b\": container with ID starting with bb084137d5918afdeb931c3364180027f66610bb94a9b283e99cdd6522173e0b not found: ID does not exist" Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.665460 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:46:12 crc kubenswrapper[4730]: I0320 16:46:12.674761 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-64spl"] Mar 20 16:46:13 crc kubenswrapper[4730]: I0320 16:46:13.338105 4730 generic.go:334] "Generic (PLEG): container finished" podID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerID="ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0" exitCode=0 Mar 20 16:46:13 crc kubenswrapper[4730]: I0320 16:46:13.338176 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerDied","Data":"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"} Mar 20 16:46:13 crc kubenswrapper[4730]: I0320 16:46:13.550560 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" 
path="/var/lib/kubelet/pods/3c95ee43-d4bb-471d-977a-4cb14f99a03e/volumes" Mar 20 16:46:14 crc kubenswrapper[4730]: I0320 16:46:14.349135 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerStarted","Data":"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"} Mar 20 16:46:14 crc kubenswrapper[4730]: I0320 16:46:14.374075 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgqdz" podStartSLOduration=2.913672732 podStartE2EDuration="7.374056381s" podCreationTimestamp="2026-03-20 16:46:07 +0000 UTC" firstStartedPulling="2026-03-20 16:46:09.289686246 +0000 UTC m=+4028.503057615" lastFinishedPulling="2026-03-20 16:46:13.750069895 +0000 UTC m=+4032.963441264" observedRunningTime="2026-03-20 16:46:14.367356811 +0000 UTC m=+4033.580728180" watchObservedRunningTime="2026-03-20 16:46:14.374056381 +0000 UTC m=+4033.587427760" Mar 20 16:46:17 crc kubenswrapper[4730]: I0320 16:46:17.745003 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:17 crc kubenswrapper[4730]: I0320 16:46:17.747046 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:17 crc kubenswrapper[4730]: I0320 16:46:17.794149 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:18 crc kubenswrapper[4730]: I0320 16:46:18.460359 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:18 crc kubenswrapper[4730]: I0320 16:46:18.963852 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgqdz"] Mar 20 16:46:20 crc 
kubenswrapper[4730]: I0320 16:46:20.405411 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgqdz" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="registry-server" containerID="cri-o://4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8" gracePeriod=2 Mar 20 16:46:20 crc kubenswrapper[4730]: I0320 16:46:20.535344 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:46:20 crc kubenswrapper[4730]: E0320 16:46:20.536011 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.007683 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgqdz" Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.125634 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfng9\" (UniqueName: \"kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9\") pod \"64b3980f-0b4c-4751-9152-a70ce47eca6a\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.126157 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities\") pod \"64b3980f-0b4c-4751-9152-a70ce47eca6a\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.126388 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content\") pod \"64b3980f-0b4c-4751-9152-a70ce47eca6a\" (UID: \"64b3980f-0b4c-4751-9152-a70ce47eca6a\") " Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.126881 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities" (OuterVolumeSpecName: "utilities") pod "64b3980f-0b4c-4751-9152-a70ce47eca6a" (UID: "64b3980f-0b4c-4751-9152-a70ce47eca6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.127453 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.134595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9" (OuterVolumeSpecName: "kube-api-access-dfng9") pod "64b3980f-0b4c-4751-9152-a70ce47eca6a" (UID: "64b3980f-0b4c-4751-9152-a70ce47eca6a"). InnerVolumeSpecName "kube-api-access-dfng9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.206558 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64b3980f-0b4c-4751-9152-a70ce47eca6a" (UID: "64b3980f-0b4c-4751-9152-a70ce47eca6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.229691 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfng9\" (UniqueName: \"kubernetes.io/projected/64b3980f-0b4c-4751-9152-a70ce47eca6a-kube-api-access-dfng9\") on node \"crc\" DevicePath \"\""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.229740 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b3980f-0b4c-4751-9152-a70ce47eca6a-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.415374 4730 generic.go:334] "Generic (PLEG): container finished" podID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerID="4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8" exitCode=0
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.415429 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerDied","Data":"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"}
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.415460 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgqdz" event={"ID":"64b3980f-0b4c-4751-9152-a70ce47eca6a","Type":"ContainerDied","Data":"007c95ca65feb4b29413d4a25914d3478366cc965bcb7c6e91b5caf3907ff6e4"}
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.415473 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgqdz"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.415478 4730 scope.go:117] "RemoveContainer" containerID="4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.440741 4730 scope.go:117] "RemoveContainer" containerID="ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.468335 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgqdz"]
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.476405 4730 scope.go:117] "RemoveContainer" containerID="086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.478192 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgqdz"]
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.568708 4730 scope.go:117] "RemoveContainer" containerID="4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"
Mar 20 16:46:21 crc kubenswrapper[4730]: E0320 16:46:21.569282 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8\": container with ID starting with 4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8 not found: ID does not exist" containerID="4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.569328 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8"} err="failed to get container status \"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8\": rpc error: code = NotFound desc = could not find container \"4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8\": container with ID starting with 4e18994692c56d92d7f2b3507847c3f6598d1c195897adcc3749b1a091493eb8 not found: ID does not exist"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.569358 4730 scope.go:117] "RemoveContainer" containerID="ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"
Mar 20 16:46:21 crc kubenswrapper[4730]: E0320 16:46:21.569648 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0\": container with ID starting with ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0 not found: ID does not exist" containerID="ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.569719 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0"} err="failed to get container status \"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0\": rpc error: code = NotFound desc = could not find container \"ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0\": container with ID starting with ea1300f4a3d61efadbaf33ca42f5d74a7b4fece1736b7ee2f5ee762ab6d393a0 not found: ID does not exist"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.569761 4730 scope.go:117] "RemoveContainer" containerID="086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02"
Mar 20 16:46:21 crc kubenswrapper[4730]: E0320 16:46:21.570799 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02\": container with ID starting with 086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02 not found: ID does not exist" containerID="086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.570837 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02"} err="failed to get container status \"086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02\": rpc error: code = NotFound desc = could not find container \"086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02\": container with ID starting with 086dd9b660f91623a636ce9b3aee0e5600fcc4ff44b302c02e3e1d6ab2c57e02 not found: ID does not exist"
Mar 20 16:46:21 crc kubenswrapper[4730]: I0320 16:46:21.571613 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" path="/var/lib/kubelet/pods/64b3980f-0b4c-4751-9152-a70ce47eca6a/volumes"
Mar 20 16:46:31 crc kubenswrapper[4730]: I0320 16:46:31.546646 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:46:31 crc kubenswrapper[4730]: E0320 16:46:31.547987 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:46:45 crc kubenswrapper[4730]: I0320 16:46:45.533522 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:46:45 crc kubenswrapper[4730]: E0320 16:46:45.534354 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:46:51 crc kubenswrapper[4730]: I0320 16:46:51.289025 4730 scope.go:117] "RemoveContainer" containerID="86dd9cb2df6336d37948551b33d5a151e10e12f60435fec8a924c6900e110929"
Mar 20 16:46:59 crc kubenswrapper[4730]: I0320 16:46:59.533888 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:46:59 crc kubenswrapper[4730]: E0320 16:46:59.535187 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:47:13 crc kubenswrapper[4730]: I0320 16:47:13.532881 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:47:13 crc kubenswrapper[4730]: E0320 16:47:13.533957 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:47:26 crc kubenswrapper[4730]: I0320 16:47:26.534693 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:47:26 crc kubenswrapper[4730]: E0320 16:47:26.535942 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:47:41 crc kubenswrapper[4730]: I0320 16:47:41.539490 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:47:41 crc kubenswrapper[4730]: E0320 16:47:41.540448 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:47:55 crc kubenswrapper[4730]: I0320 16:47:55.534070 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:47:55 crc kubenswrapper[4730]: E0320 16:47:55.535284 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.145497 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567088-jzf6l"]
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146428 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="extract-utilities"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146444 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="extract-utilities"
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146464 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146471 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146481 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="extract-content"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146487 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="extract-content"
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146501 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="extract-utilities"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146506 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="extract-utilities"
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146514 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="extract-content"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146519 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="extract-content"
Mar 20 16:48:00 crc kubenswrapper[4730]: E0320 16:48:00.146536 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146542 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146718 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c95ee43-d4bb-471d-977a-4cb14f99a03e" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.146748 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b3980f-0b4c-4751-9152-a70ce47eca6a" containerName="registry-server"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.147459 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.149229 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.149365 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.150943 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.163558 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567088-jzf6l"]
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.264290 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf6j\" (UniqueName: \"kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j\") pod \"auto-csr-approver-29567088-jzf6l\" (UID: \"423d9abb-9507-4e8e-aa00-42b3d34328ed\") " pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.366897 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcf6j\" (UniqueName: \"kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j\") pod \"auto-csr-approver-29567088-jzf6l\" (UID: \"423d9abb-9507-4e8e-aa00-42b3d34328ed\") " pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.399015 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcf6j\" (UniqueName: \"kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j\") pod \"auto-csr-approver-29567088-jzf6l\" (UID: \"423d9abb-9507-4e8e-aa00-42b3d34328ed\") " pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.470179 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:00 crc kubenswrapper[4730]: I0320 16:48:00.974122 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567088-jzf6l"]
Mar 20 16:48:01 crc kubenswrapper[4730]: I0320 16:48:01.529744 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567088-jzf6l" event={"ID":"423d9abb-9507-4e8e-aa00-42b3d34328ed","Type":"ContainerStarted","Data":"1c6704cd2d75950e01dbade980fd9ad91866b5accc9f9b477ddc38c55b304a3e"}
Mar 20 16:48:03 crc kubenswrapper[4730]: I0320 16:48:03.547792 4730 generic.go:334] "Generic (PLEG): container finished" podID="423d9abb-9507-4e8e-aa00-42b3d34328ed" containerID="7b312e45c65bdab74120878c5bbe6f1323de4c86f0295b5909d9931e6d7a0af0" exitCode=0
Mar 20 16:48:03 crc kubenswrapper[4730]: I0320 16:48:03.547864 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567088-jzf6l" event={"ID":"423d9abb-9507-4e8e-aa00-42b3d34328ed","Type":"ContainerDied","Data":"7b312e45c65bdab74120878c5bbe6f1323de4c86f0295b5909d9931e6d7a0af0"}
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.001298 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.184145 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcf6j\" (UniqueName: \"kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j\") pod \"423d9abb-9507-4e8e-aa00-42b3d34328ed\" (UID: \"423d9abb-9507-4e8e-aa00-42b3d34328ed\") "
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.190477 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j" (OuterVolumeSpecName: "kube-api-access-jcf6j") pod "423d9abb-9507-4e8e-aa00-42b3d34328ed" (UID: "423d9abb-9507-4e8e-aa00-42b3d34328ed"). InnerVolumeSpecName "kube-api-access-jcf6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.286783 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcf6j\" (UniqueName: \"kubernetes.io/projected/423d9abb-9507-4e8e-aa00-42b3d34328ed-kube-api-access-jcf6j\") on node \"crc\" DevicePath \"\""
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.580618 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567088-jzf6l" event={"ID":"423d9abb-9507-4e8e-aa00-42b3d34328ed","Type":"ContainerDied","Data":"1c6704cd2d75950e01dbade980fd9ad91866b5accc9f9b477ddc38c55b304a3e"}
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.580671 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6704cd2d75950e01dbade980fd9ad91866b5accc9f9b477ddc38c55b304a3e"
Mar 20 16:48:05 crc kubenswrapper[4730]: I0320 16:48:05.580761 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567088-jzf6l"
Mar 20 16:48:06 crc kubenswrapper[4730]: I0320 16:48:06.078356 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567082-597m4"]
Mar 20 16:48:06 crc kubenswrapper[4730]: I0320 16:48:06.090846 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567082-597m4"]
Mar 20 16:48:07 crc kubenswrapper[4730]: I0320 16:48:07.546936 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713df8c0-cae2-4cfd-9ecf-66856a78c066" path="/var/lib/kubelet/pods/713df8c0-cae2-4cfd-9ecf-66856a78c066/volumes"
Mar 20 16:48:08 crc kubenswrapper[4730]: I0320 16:48:08.533723 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:48:08 crc kubenswrapper[4730]: E0320 16:48:08.534826 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:48:17 crc kubenswrapper[4730]: I0320 16:48:17.272127 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7c5c8ffdd9-xpfhf" podUID="b9780622-27f3-4339-8107-321feed5e25b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Mar 20 16:48:23 crc kubenswrapper[4730]: I0320 16:48:23.533386 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:48:23 crc kubenswrapper[4730]: E0320 16:48:23.534353 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:48:35 crc kubenswrapper[4730]: I0320 16:48:35.532709 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:48:35 crc kubenswrapper[4730]: E0320 16:48:35.533512 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.200391 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"]
Mar 20 16:48:47 crc kubenswrapper[4730]: E0320 16:48:47.201591 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423d9abb-9507-4e8e-aa00-42b3d34328ed" containerName="oc"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.201612 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="423d9abb-9507-4e8e-aa00-42b3d34328ed" containerName="oc"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.201865 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="423d9abb-9507-4e8e-aa00-42b3d34328ed" containerName="oc"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.203624 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.258148 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"]
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.369511 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gqd\" (UniqueName: \"kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.369961 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.370082 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.472049 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.472170 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gqd\" (UniqueName: \"kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.472261 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.472718 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.472762 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.490403 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gqd\" (UniqueName: \"kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd\") pod \"redhat-operators-l6l4h\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.591745 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.797337 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"]
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.799668 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.817318 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"]
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.880115 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.880179 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.880207 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnsd\" (UniqueName: \"kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.982366 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.982434 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.982464 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnsd\" (UniqueName: \"kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.982992 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:47 crc kubenswrapper[4730]: I0320 16:48:47.982992 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:48 crc kubenswrapper[4730]: I0320 16:48:48.003166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnsd\" (UniqueName: \"kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd\") pod \"certified-operators-z7gbm\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:48 crc kubenswrapper[4730]: I0320 16:48:48.065595 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"]
Mar 20 16:48:48 crc kubenswrapper[4730]: I0320 16:48:48.126394 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:48 crc kubenswrapper[4730]: I0320 16:48:48.635729 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"]
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.044854 4730 generic.go:334] "Generic (PLEG): container finished" podID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerID="9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384" exitCode=0
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.044953 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerDied","Data":"9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384"}
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.045346 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerStarted","Data":"d2ca20484b8b40d7c6540ecf524ef385e1e84275942650c4814c8b0e6d8e9e71"}
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.047979 4730 generic.go:334] "Generic (PLEG): container finished" podID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerID="d65dd161a59341b12a4cdf8839f51c7e61e875c1a19f37eba22280b7b377868a" exitCode=0
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.048014 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerDied","Data":"d65dd161a59341b12a4cdf8839f51c7e61e875c1a19f37eba22280b7b377868a"}
Mar 20 16:48:49 crc kubenswrapper[4730]: I0320 16:48:49.048042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerStarted","Data":"3d1bafb556879d48fd1c609a49ffd281b803156d4929627c53c4cf726a9391f4"}
Mar 20 16:48:50 crc kubenswrapper[4730]: I0320 16:48:50.057043 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerStarted","Data":"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47"}
Mar 20 16:48:50 crc kubenswrapper[4730]: I0320 16:48:50.059978 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerStarted","Data":"dc80adcd98dab5030f9344d7fc4b434975389290e0aacc050a76cea518392d8e"}
Mar 20 16:48:50 crc kubenswrapper[4730]: I0320 16:48:50.533690 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:48:50 crc kubenswrapper[4730]: E0320 16:48:50.534009 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3"
Mar 20 16:48:51 crc kubenswrapper[4730]: I0320 16:48:51.415472 4730 scope.go:117] "RemoveContainer" containerID="44f54f3fc7434586ebe0f8d3b305da77181f85e1a148a72f852acf4f69b33aae"
Mar 20 16:48:52 crc kubenswrapper[4730]: I0320 16:48:52.084417 4730 generic.go:334] "Generic (PLEG): container finished" podID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerID="dc80adcd98dab5030f9344d7fc4b434975389290e0aacc050a76cea518392d8e" exitCode=0
Mar 20 16:48:52 crc kubenswrapper[4730]: I0320 16:48:52.084465 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerDied","Data":"dc80adcd98dab5030f9344d7fc4b434975389290e0aacc050a76cea518392d8e"}
Mar 20 16:48:53 crc kubenswrapper[4730]: I0320 16:48:53.095582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerStarted","Data":"338ae4a95e896fe7a2224e63f3555e3b661a6774b8e06328bbe2fece926f00df"}
Mar 20 16:48:53 crc kubenswrapper[4730]: I0320 16:48:53.118383 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7gbm" podStartSLOduration=2.358345201 podStartE2EDuration="6.118357394s" podCreationTimestamp="2026-03-20 16:48:47 +0000 UTC" firstStartedPulling="2026-03-20 16:48:49.049579673 +0000 UTC m=+4188.262951042" lastFinishedPulling="2026-03-20 16:48:52.809591866 +0000 UTC m=+4192.022963235" observedRunningTime="2026-03-20 16:48:53.112282281 +0000 UTC m=+4192.325653670" watchObservedRunningTime="2026-03-20 16:48:53.118357394 +0000 UTC m=+4192.331728763"
Mar 20 16:48:55 crc kubenswrapper[4730]: I0320 16:48:55.114962 4730 generic.go:334] "Generic (PLEG): container finished" podID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerID="74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47" exitCode=0
Mar 20 16:48:55 crc kubenswrapper[4730]: I0320 16:48:55.115322 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerDied","Data":"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47"}
Mar 20 16:48:56 crc kubenswrapper[4730]: I0320 16:48:56.129214 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerStarted","Data":"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3"}
Mar 20 16:48:56 crc kubenswrapper[4730]: I0320 16:48:56.160999 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6l4h" podStartSLOduration=2.47031257 podStartE2EDuration="9.160980956s" podCreationTimestamp="2026-03-20 16:48:47 +0000 UTC" firstStartedPulling="2026-03-20 16:48:49.047492553 +0000 UTC m=+4188.260863922" lastFinishedPulling="2026-03-20 16:48:55.738160939 +0000 UTC m=+4194.951532308" observedRunningTime="2026-03-20 16:48:56.150768906 +0000 UTC m=+4195.364140295" watchObservedRunningTime="2026-03-20 16:48:56.160980956 +0000 UTC m=+4195.374352315"
Mar 20 16:48:57 crc kubenswrapper[4730]: I0320 16:48:57.592280 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:57 crc kubenswrapper[4730]: I0320 16:48:57.593795 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6l4h"
Mar 20 16:48:58 crc kubenswrapper[4730]: I0320 16:48:58.127022 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:58 crc kubenswrapper[4730]: I0320 16:48:58.127081 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7gbm"
Mar 20 16:48:58 crc kubenswrapper[4730]: I0320 16:48:58.637929 4730 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6l4h" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" probeResult="failure" output=< Mar 20 16:48:58 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:48:58 crc kubenswrapper[4730]: > Mar 20 16:48:59 crc kubenswrapper[4730]: I0320 16:48:59.172419 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-z7gbm" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="registry-server" probeResult="failure" output=< Mar 20 16:48:59 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:48:59 crc kubenswrapper[4730]: > Mar 20 16:49:04 crc kubenswrapper[4730]: I0320 16:49:04.534731 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:49:04 crc kubenswrapper[4730]: E0320 16:49:04.535316 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:49:08 crc kubenswrapper[4730]: I0320 16:49:08.207443 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7gbm" Mar 20 16:49:08 crc kubenswrapper[4730]: I0320 16:49:08.271831 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7gbm" Mar 20 16:49:08 crc kubenswrapper[4730]: I0320 16:49:08.451172 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"] Mar 20 16:49:09 crc 
kubenswrapper[4730]: I0320 16:49:09.045637 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6l4h" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" probeResult="failure" output=< Mar 20 16:49:09 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:49:09 crc kubenswrapper[4730]: > Mar 20 16:49:09 crc kubenswrapper[4730]: I0320 16:49:09.268694 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7gbm" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="registry-server" containerID="cri-o://338ae4a95e896fe7a2224e63f3555e3b661a6774b8e06328bbe2fece926f00df" gracePeriod=2 Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.286400 4730 generic.go:334] "Generic (PLEG): container finished" podID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerID="338ae4a95e896fe7a2224e63f3555e3b661a6774b8e06328bbe2fece926f00df" exitCode=0 Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.286741 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerDied","Data":"338ae4a95e896fe7a2224e63f3555e3b661a6774b8e06328bbe2fece926f00df"} Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.468361 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7gbm" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.662169 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities\") pod \"9529743d-7cc8-4356-a60a-efb0c0657fc6\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.662479 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnsd\" (UniqueName: \"kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd\") pod \"9529743d-7cc8-4356-a60a-efb0c0657fc6\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.662545 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content\") pod \"9529743d-7cc8-4356-a60a-efb0c0657fc6\" (UID: \"9529743d-7cc8-4356-a60a-efb0c0657fc6\") " Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.663194 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities" (OuterVolumeSpecName: "utilities") pod "9529743d-7cc8-4356-a60a-efb0c0657fc6" (UID: "9529743d-7cc8-4356-a60a-efb0c0657fc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.668502 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd" (OuterVolumeSpecName: "kube-api-access-kfnsd") pod "9529743d-7cc8-4356-a60a-efb0c0657fc6" (UID: "9529743d-7cc8-4356-a60a-efb0c0657fc6"). InnerVolumeSpecName "kube-api-access-kfnsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.726212 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9529743d-7cc8-4356-a60a-efb0c0657fc6" (UID: "9529743d-7cc8-4356-a60a-efb0c0657fc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.765642 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnsd\" (UniqueName: \"kubernetes.io/projected/9529743d-7cc8-4356-a60a-efb0c0657fc6-kube-api-access-kfnsd\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.765675 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:10 crc kubenswrapper[4730]: I0320 16:49:10.765684 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9529743d-7cc8-4356-a60a-efb0c0657fc6-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.298195 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7gbm" event={"ID":"9529743d-7cc8-4356-a60a-efb0c0657fc6","Type":"ContainerDied","Data":"3d1bafb556879d48fd1c609a49ffd281b803156d4929627c53c4cf726a9391f4"} Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.298546 4730 scope.go:117] "RemoveContainer" containerID="338ae4a95e896fe7a2224e63f3555e3b661a6774b8e06328bbe2fece926f00df" Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.298399 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7gbm" Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.320837 4730 scope.go:117] "RemoveContainer" containerID="dc80adcd98dab5030f9344d7fc4b434975389290e0aacc050a76cea518392d8e" Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.344972 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"] Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.353781 4730 scope.go:117] "RemoveContainer" containerID="d65dd161a59341b12a4cdf8839f51c7e61e875c1a19f37eba22280b7b377868a" Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.358388 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7gbm"] Mar 20 16:49:11 crc kubenswrapper[4730]: I0320 16:49:11.560776 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" path="/var/lib/kubelet/pods/9529743d-7cc8-4356-a60a-efb0c0657fc6/volumes" Mar 20 16:49:18 crc kubenswrapper[4730]: I0320 16:49:18.635198 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6l4h" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" probeResult="failure" output=< Mar 20 16:49:18 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:49:18 crc kubenswrapper[4730]: > Mar 20 16:49:19 crc kubenswrapper[4730]: I0320 16:49:19.538289 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:49:19 crc kubenswrapper[4730]: E0320 16:49:19.538588 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:49:27 crc kubenswrapper[4730]: I0320 16:49:27.658421 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6l4h" Mar 20 16:49:27 crc kubenswrapper[4730]: I0320 16:49:27.736763 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6l4h" Mar 20 16:49:27 crc kubenswrapper[4730]: I0320 16:49:27.909186 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"] Mar 20 16:49:29 crc kubenswrapper[4730]: I0320 16:49:29.492690 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l6l4h" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" containerID="cri-o://171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3" gracePeriod=2 Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.448565 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6l4h" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.506190 4730 generic.go:334] "Generic (PLEG): container finished" podID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerID="171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3" exitCode=0 Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.506236 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerDied","Data":"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3"} Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.506277 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6l4h" event={"ID":"e6a0115e-9fa1-4809-adb5-76be4e66cd52","Type":"ContainerDied","Data":"d2ca20484b8b40d7c6540ecf524ef385e1e84275942650c4814c8b0e6d8e9e71"} Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.506299 4730 scope.go:117] "RemoveContainer" containerID="171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.506383 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6l4h" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.531447 4730 scope.go:117] "RemoveContainer" containerID="74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.564671 4730 scope.go:117] "RemoveContainer" containerID="9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.623278 4730 scope.go:117] "RemoveContainer" containerID="171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3" Mar 20 16:49:30 crc kubenswrapper[4730]: E0320 16:49:30.623803 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3\": container with ID starting with 171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3 not found: ID does not exist" containerID="171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.623845 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3"} err="failed to get container status \"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3\": rpc error: code = NotFound desc = could not find container \"171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3\": container with ID starting with 171abedfb7066a417cee9f0d2c0b610b90402d3dd8de8f88927c40fb5e73a8d3 not found: ID does not exist" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.623873 4730 scope.go:117] "RemoveContainer" containerID="74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47" Mar 20 16:49:30 crc kubenswrapper[4730]: E0320 16:49:30.624341 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47\": container with ID starting with 74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47 not found: ID does not exist" containerID="74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.624404 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47"} err="failed to get container status \"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47\": rpc error: code = NotFound desc = could not find container \"74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47\": container with ID starting with 74f6aec5b0a2a2981ace72e9b9444851318412a96c97f48bbc92f4581cf79a47 not found: ID does not exist" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.624442 4730 scope.go:117] "RemoveContainer" containerID="9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384" Mar 20 16:49:30 crc kubenswrapper[4730]: E0320 16:49:30.624723 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384\": container with ID starting with 9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384 not found: ID does not exist" containerID="9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.624765 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384"} err="failed to get container status \"9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384\": rpc error: code = NotFound desc = could not find container 
\"9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384\": container with ID starting with 9cafb991c7ca35e5ba77af18a045990efcbfc9d1be49bb90913f2673a3398384 not found: ID does not exist" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.629320 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content\") pod \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.629526 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68gqd\" (UniqueName: \"kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd\") pod \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.629661 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities\") pod \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\" (UID: \"e6a0115e-9fa1-4809-adb5-76be4e66cd52\") " Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.630539 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities" (OuterVolumeSpecName: "utilities") pod "e6a0115e-9fa1-4809-adb5-76be4e66cd52" (UID: "e6a0115e-9fa1-4809-adb5-76be4e66cd52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.638299 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd" (OuterVolumeSpecName: "kube-api-access-68gqd") pod "e6a0115e-9fa1-4809-adb5-76be4e66cd52" (UID: "e6a0115e-9fa1-4809-adb5-76be4e66cd52"). InnerVolumeSpecName "kube-api-access-68gqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.731946 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68gqd\" (UniqueName: \"kubernetes.io/projected/e6a0115e-9fa1-4809-adb5-76be4e66cd52-kube-api-access-68gqd\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.732344 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.772649 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6a0115e-9fa1-4809-adb5-76be4e66cd52" (UID: "e6a0115e-9fa1-4809-adb5-76be4e66cd52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.835081 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a0115e-9fa1-4809-adb5-76be4e66cd52-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.856298 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"] Mar 20 16:49:30 crc kubenswrapper[4730]: I0320 16:49:30.865652 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l6l4h"] Mar 20 16:49:31 crc kubenswrapper[4730]: I0320 16:49:31.556110 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" path="/var/lib/kubelet/pods/e6a0115e-9fa1-4809-adb5-76be4e66cd52/volumes" Mar 20 16:49:32 crc kubenswrapper[4730]: I0320 16:49:32.534004 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:49:32 crc kubenswrapper[4730]: E0320 16:49:32.535075 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:49:45 crc kubenswrapper[4730]: I0320 16:49:45.533182 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610" Mar 20 16:49:46 crc kubenswrapper[4730]: I0320 16:49:46.674653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603"} Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.168223 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567090-775ds"] Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169321 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="extract-utilities" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169338 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="extract-utilities" Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169365 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169373 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169389 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="extract-utilities" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169400 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="extract-utilities" Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169429 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="extract-content" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169439 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="extract-content" Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169456 4730 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="extract-content" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169599 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="extract-content" Mar 20 16:50:00 crc kubenswrapper[4730]: E0320 16:50:00.169624 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169632 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169894 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9529743d-7cc8-4356-a60a-efb0c0657fc6" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.169923 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a0115e-9fa1-4809-adb5-76be4e66cd52" containerName="registry-server" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.170791 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567090-775ds" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.172798 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.173911 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.178698 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.189754 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567090-775ds"] Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.292965 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52p6j\" (UniqueName: \"kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j\") pod \"auto-csr-approver-29567090-775ds\" (UID: \"347b4579-1fb8-49d0-97df-83dafdafae60\") " pod="openshift-infra/auto-csr-approver-29567090-775ds" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.395851 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52p6j\" (UniqueName: \"kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j\") pod \"auto-csr-approver-29567090-775ds\" (UID: \"347b4579-1fb8-49d0-97df-83dafdafae60\") " pod="openshift-infra/auto-csr-approver-29567090-775ds" Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.418492 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52p6j\" (UniqueName: \"kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j\") pod \"auto-csr-approver-29567090-775ds\" (UID: \"347b4579-1fb8-49d0-97df-83dafdafae60\") " 
pod="openshift-infra/auto-csr-approver-29567090-775ds"
Mar 20 16:50:00 crc kubenswrapper[4730]: I0320 16:50:00.506929 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567090-775ds"
Mar 20 16:50:01 crc kubenswrapper[4730]: I0320 16:50:01.000643 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567090-775ds"]
Mar 20 16:50:01 crc kubenswrapper[4730]: I0320 16:50:01.841837 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567090-775ds" event={"ID":"347b4579-1fb8-49d0-97df-83dafdafae60","Type":"ContainerStarted","Data":"38c9e55b6b94f60c333409db3275047b6f0ecc0cb0071bf61778bab5d364cfe2"}
Mar 20 16:50:03 crc kubenswrapper[4730]: I0320 16:50:03.875946 4730 generic.go:334] "Generic (PLEG): container finished" podID="347b4579-1fb8-49d0-97df-83dafdafae60" containerID="1b59cf8fa45b394473fc97dca5b148ed197f64170983d43d46fd520ecdca6208" exitCode=0
Mar 20 16:50:03 crc kubenswrapper[4730]: I0320 16:50:03.876090 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567090-775ds" event={"ID":"347b4579-1fb8-49d0-97df-83dafdafae60","Type":"ContainerDied","Data":"1b59cf8fa45b394473fc97dca5b148ed197f64170983d43d46fd520ecdca6208"}
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.303800 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567090-775ds"
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.410926 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52p6j\" (UniqueName: \"kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j\") pod \"347b4579-1fb8-49d0-97df-83dafdafae60\" (UID: \"347b4579-1fb8-49d0-97df-83dafdafae60\") "
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.425234 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j" (OuterVolumeSpecName: "kube-api-access-52p6j") pod "347b4579-1fb8-49d0-97df-83dafdafae60" (UID: "347b4579-1fb8-49d0-97df-83dafdafae60"). InnerVolumeSpecName "kube-api-access-52p6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.513575 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52p6j\" (UniqueName: \"kubernetes.io/projected/347b4579-1fb8-49d0-97df-83dafdafae60-kube-api-access-52p6j\") on node \"crc\" DevicePath \"\""
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.910611 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567090-775ds" event={"ID":"347b4579-1fb8-49d0-97df-83dafdafae60","Type":"ContainerDied","Data":"38c9e55b6b94f60c333409db3275047b6f0ecc0cb0071bf61778bab5d364cfe2"}
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.910647 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c9e55b6b94f60c333409db3275047b6f0ecc0cb0071bf61778bab5d364cfe2"
Mar 20 16:50:05 crc kubenswrapper[4730]: I0320 16:50:05.910685 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567090-775ds"
Mar 20 16:50:06 crc kubenswrapper[4730]: I0320 16:50:06.392053 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567084-c4mpd"]
Mar 20 16:50:06 crc kubenswrapper[4730]: I0320 16:50:06.402019 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567084-c4mpd"]
Mar 20 16:50:07 crc kubenswrapper[4730]: I0320 16:50:07.552491 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51901671-4d27-46c5-9a9d-baf51b2b9c01" path="/var/lib/kubelet/pods/51901671-4d27-46c5-9a9d-baf51b2b9c01/volumes"
Mar 20 16:50:51 crc kubenswrapper[4730]: I0320 16:50:51.588194 4730 scope.go:117] "RemoveContainer" containerID="cbdd90e3d11772056ef45ec365a19533f01de2f8c0583c5498c86a843612b56d"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.150084 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567092-v8dfz"]
Mar 20 16:52:00 crc kubenswrapper[4730]: E0320 16:52:00.151417 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347b4579-1fb8-49d0-97df-83dafdafae60" containerName="oc"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.151441 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="347b4579-1fb8-49d0-97df-83dafdafae60" containerName="oc"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.151737 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="347b4579-1fb8-49d0-97df-83dafdafae60" containerName="oc"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.152606 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.154341 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.155506 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.155801 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.159946 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567092-v8dfz"]
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.239536 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwgk\" (UniqueName: \"kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk\") pod \"auto-csr-approver-29567092-v8dfz\" (UID: \"31e0f6b1-5405-420b-941e-4e711281673f\") " pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.341914 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwgk\" (UniqueName: \"kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk\") pod \"auto-csr-approver-29567092-v8dfz\" (UID: \"31e0f6b1-5405-420b-941e-4e711281673f\") " pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.373615 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwgk\" (UniqueName: \"kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk\") pod \"auto-csr-approver-29567092-v8dfz\" (UID: \"31e0f6b1-5405-420b-941e-4e711281673f\") " pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.471435 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.933591 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 16:52:00 crc kubenswrapper[4730]: I0320 16:52:00.940556 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567092-v8dfz"]
Mar 20 16:52:01 crc kubenswrapper[4730]: I0320 16:52:01.132430 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567092-v8dfz" event={"ID":"31e0f6b1-5405-420b-941e-4e711281673f","Type":"ContainerStarted","Data":"a2bdd09093211e701e97bcc2dc049bdcfd33afcd27d21b5c9962c562e342d089"}
Mar 20 16:52:03 crc kubenswrapper[4730]: I0320 16:52:03.152068 4730 generic.go:334] "Generic (PLEG): container finished" podID="31e0f6b1-5405-420b-941e-4e711281673f" containerID="ab6be6d2c86b64d8fa302dbb64f113f58f10be72cfc3ad77f609318442cc34dd" exitCode=0
Mar 20 16:52:03 crc kubenswrapper[4730]: I0320 16:52:03.152177 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567092-v8dfz" event={"ID":"31e0f6b1-5405-420b-941e-4e711281673f","Type":"ContainerDied","Data":"ab6be6d2c86b64d8fa302dbb64f113f58f10be72cfc3ad77f609318442cc34dd"}
Mar 20 16:52:04 crc kubenswrapper[4730]: I0320 16:52:04.511325 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:04 crc kubenswrapper[4730]: I0320 16:52:04.631591 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmwgk\" (UniqueName: \"kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk\") pod \"31e0f6b1-5405-420b-941e-4e711281673f\" (UID: \"31e0f6b1-5405-420b-941e-4e711281673f\") "
Mar 20 16:52:04 crc kubenswrapper[4730]: I0320 16:52:04.637634 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk" (OuterVolumeSpecName: "kube-api-access-nmwgk") pod "31e0f6b1-5405-420b-941e-4e711281673f" (UID: "31e0f6b1-5405-420b-941e-4e711281673f"). InnerVolumeSpecName "kube-api-access-nmwgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:52:04 crc kubenswrapper[4730]: I0320 16:52:04.734484 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmwgk\" (UniqueName: \"kubernetes.io/projected/31e0f6b1-5405-420b-941e-4e711281673f-kube-api-access-nmwgk\") on node \"crc\" DevicePath \"\""
Mar 20 16:52:05 crc kubenswrapper[4730]: I0320 16:52:05.173766 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567092-v8dfz" event={"ID":"31e0f6b1-5405-420b-941e-4e711281673f","Type":"ContainerDied","Data":"a2bdd09093211e701e97bcc2dc049bdcfd33afcd27d21b5c9962c562e342d089"}
Mar 20 16:52:05 crc kubenswrapper[4730]: I0320 16:52:05.173827 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bdd09093211e701e97bcc2dc049bdcfd33afcd27d21b5c9962c562e342d089"
Mar 20 16:52:05 crc kubenswrapper[4730]: I0320 16:52:05.173829 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567092-v8dfz"
Mar 20 16:52:05 crc kubenswrapper[4730]: I0320 16:52:05.580271 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567086-bgtzj"]
Mar 20 16:52:05 crc kubenswrapper[4730]: I0320 16:52:05.605258 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567086-bgtzj"]
Mar 20 16:52:07 crc kubenswrapper[4730]: I0320 16:52:07.544651 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1598084b-3967-4d5d-8911-87b4bbf10965" path="/var/lib/kubelet/pods/1598084b-3967-4d5d-8911-87b4bbf10965/volumes"
Mar 20 16:52:12 crc kubenswrapper[4730]: I0320 16:52:12.879821 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:52:12 crc kubenswrapper[4730]: I0320 16:52:12.880119 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:52:42 crc kubenswrapper[4730]: I0320 16:52:42.882944 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:52:42 crc kubenswrapper[4730]: I0320 16:52:42.883498 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:52:51 crc kubenswrapper[4730]: I0320 16:52:51.699310 4730 scope.go:117] "RemoveContainer" containerID="090324d1acddd1e29456802f46699a7cfabedaef8f848dbdf774851d4687bf7f"
Mar 20 16:53:12 crc kubenswrapper[4730]: I0320 16:53:12.880552 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:53:12 crc kubenswrapper[4730]: I0320 16:53:12.881466 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:53:12 crc kubenswrapper[4730]: I0320 16:53:12.881529 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf"
Mar 20 16:53:12 crc kubenswrapper[4730]: I0320 16:53:12.882428 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 16:53:12 crc kubenswrapper[4730]: I0320 16:53:12.882495 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603" gracePeriod=600
Mar 20 16:53:13 crc kubenswrapper[4730]: I0320 16:53:13.829784 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603" exitCode=0
Mar 20 16:53:13 crc kubenswrapper[4730]: I0320 16:53:13.830117 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603"}
Mar 20 16:53:13 crc kubenswrapper[4730]: I0320 16:53:13.830145 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9"}
Mar 20 16:53:13 crc kubenswrapper[4730]: I0320 16:53:13.830161 4730 scope.go:117] "RemoveContainer" containerID="ee13c0427129be645bbda82e57ffd84564f844ed3cec3a2748df09a4a24fe610"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.168307 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567094-5zljx"]
Mar 20 16:54:00 crc kubenswrapper[4730]: E0320 16:54:00.169525 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e0f6b1-5405-420b-941e-4e711281673f" containerName="oc"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.169546 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e0f6b1-5405-420b-941e-4e711281673f" containerName="oc"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.169887 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e0f6b1-5405-420b-941e-4e711281673f" containerName="oc"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.171008 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.175200 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.175582 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.175624 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.189856 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567094-5zljx"]
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.329933 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lm8m\" (UniqueName: \"kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m\") pod \"auto-csr-approver-29567094-5zljx\" (UID: \"a009be79-bc2f-45ca-94b9-f0da37a6abdc\") " pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.431753 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lm8m\" (UniqueName: \"kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m\") pod \"auto-csr-approver-29567094-5zljx\" (UID: \"a009be79-bc2f-45ca-94b9-f0da37a6abdc\") " pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.458669 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lm8m\" (UniqueName: \"kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m\") pod \"auto-csr-approver-29567094-5zljx\" (UID: \"a009be79-bc2f-45ca-94b9-f0da37a6abdc\") " pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.498989 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:00 crc kubenswrapper[4730]: I0320 16:54:00.954049 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567094-5zljx"]
Mar 20 16:54:01 crc kubenswrapper[4730]: I0320 16:54:01.326473 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567094-5zljx" event={"ID":"a009be79-bc2f-45ca-94b9-f0da37a6abdc","Type":"ContainerStarted","Data":"2e3f3ff8766b4b70c26134bd30d13d680af318f12ab19620310fa8fbf92370ea"}
Mar 20 16:54:02 crc kubenswrapper[4730]: I0320 16:54:02.340480 4730 generic.go:334] "Generic (PLEG): container finished" podID="a009be79-bc2f-45ca-94b9-f0da37a6abdc" containerID="bfe54023e94fe2434c4e1c76acbd4f6a1cd0b3f0a5a82a87fd8b931a2f1901c9" exitCode=0
Mar 20 16:54:02 crc kubenswrapper[4730]: I0320 16:54:02.340957 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567094-5zljx" event={"ID":"a009be79-bc2f-45ca-94b9-f0da37a6abdc","Type":"ContainerDied","Data":"bfe54023e94fe2434c4e1c76acbd4f6a1cd0b3f0a5a82a87fd8b931a2f1901c9"}
Mar 20 16:54:03 crc kubenswrapper[4730]: I0320 16:54:03.718140 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:03 crc kubenswrapper[4730]: I0320 16:54:03.803961 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lm8m\" (UniqueName: \"kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m\") pod \"a009be79-bc2f-45ca-94b9-f0da37a6abdc\" (UID: \"a009be79-bc2f-45ca-94b9-f0da37a6abdc\") "
Mar 20 16:54:03 crc kubenswrapper[4730]: I0320 16:54:03.811712 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m" (OuterVolumeSpecName: "kube-api-access-2lm8m") pod "a009be79-bc2f-45ca-94b9-f0da37a6abdc" (UID: "a009be79-bc2f-45ca-94b9-f0da37a6abdc"). InnerVolumeSpecName "kube-api-access-2lm8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:54:03 crc kubenswrapper[4730]: I0320 16:54:03.907572 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lm8m\" (UniqueName: \"kubernetes.io/projected/a009be79-bc2f-45ca-94b9-f0da37a6abdc-kube-api-access-2lm8m\") on node \"crc\" DevicePath \"\""
Mar 20 16:54:04 crc kubenswrapper[4730]: I0320 16:54:04.364927 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567094-5zljx" event={"ID":"a009be79-bc2f-45ca-94b9-f0da37a6abdc","Type":"ContainerDied","Data":"2e3f3ff8766b4b70c26134bd30d13d680af318f12ab19620310fa8fbf92370ea"}
Mar 20 16:54:04 crc kubenswrapper[4730]: I0320 16:54:04.364963 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e3f3ff8766b4b70c26134bd30d13d680af318f12ab19620310fa8fbf92370ea"
Mar 20 16:54:04 crc kubenswrapper[4730]: I0320 16:54:04.365023 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567094-5zljx"
Mar 20 16:54:04 crc kubenswrapper[4730]: I0320 16:54:04.787667 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567088-jzf6l"]
Mar 20 16:54:04 crc kubenswrapper[4730]: I0320 16:54:04.796804 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567088-jzf6l"]
Mar 20 16:54:05 crc kubenswrapper[4730]: I0320 16:54:05.549377 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423d9abb-9507-4e8e-aa00-42b3d34328ed" path="/var/lib/kubelet/pods/423d9abb-9507-4e8e-aa00-42b3d34328ed/volumes"
Mar 20 16:54:51 crc kubenswrapper[4730]: I0320 16:54:51.807191 4730 scope.go:117] "RemoveContainer" containerID="7b312e45c65bdab74120878c5bbe6f1323de4c86f0295b5909d9931e6d7a0af0"
Mar 20 16:55:42 crc kubenswrapper[4730]: I0320 16:55:42.882333 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:55:42 crc kubenswrapper[4730]: I0320 16:55:42.882951 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.140293 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567096-xvkcr"]
Mar 20 16:56:00 crc kubenswrapper[4730]: E0320 16:56:00.141108 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a009be79-bc2f-45ca-94b9-f0da37a6abdc" containerName="oc"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.141120 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a009be79-bc2f-45ca-94b9-f0da37a6abdc" containerName="oc"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.141330 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a009be79-bc2f-45ca-94b9-f0da37a6abdc" containerName="oc"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.141964 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.144234 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.145508 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.145862 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.149301 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567096-xvkcr"]
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.282180 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdp8\" (UniqueName: \"kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8\") pod \"auto-csr-approver-29567096-xvkcr\" (UID: \"f85cd1ac-f48f-46a3-81e3-f82b73719cb1\") " pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.384228 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdp8\" (UniqueName: \"kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8\") pod \"auto-csr-approver-29567096-xvkcr\" (UID: \"f85cd1ac-f48f-46a3-81e3-f82b73719cb1\") " pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.498762 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdp8\" (UniqueName: \"kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8\") pod \"auto-csr-approver-29567096-xvkcr\" (UID: \"f85cd1ac-f48f-46a3-81e3-f82b73719cb1\") " pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:00 crc kubenswrapper[4730]: I0320 16:56:00.760610 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:01 crc kubenswrapper[4730]: I0320 16:56:01.223486 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567096-xvkcr"]
Mar 20 16:56:01 crc kubenswrapper[4730]: I0320 16:56:01.645832 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567096-xvkcr" event={"ID":"f85cd1ac-f48f-46a3-81e3-f82b73719cb1","Type":"ContainerStarted","Data":"fc1a75c2059b3b96b5c3d8397b9f0be76a34830abeeee52f031a757c8923ae9f"}
Mar 20 16:56:02 crc kubenswrapper[4730]: I0320 16:56:02.656338 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567096-xvkcr" event={"ID":"f85cd1ac-f48f-46a3-81e3-f82b73719cb1","Type":"ContainerStarted","Data":"ad05ce67547cd2c7c1cc69cf885883fc4049286efb953f0b2d8378cbd56d924f"}
Mar 20 16:56:02 crc kubenswrapper[4730]: I0320 16:56:02.682316 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567096-xvkcr" podStartSLOduration=1.574278129 podStartE2EDuration="2.682289862s" podCreationTimestamp="2026-03-20 16:56:00 +0000 UTC" firstStartedPulling="2026-03-20 16:56:01.244894208 +0000 UTC m=+4620.458265577" lastFinishedPulling="2026-03-20 16:56:02.352905941 +0000 UTC m=+4621.566277310" observedRunningTime="2026-03-20 16:56:02.670707783 +0000 UTC m=+4621.884079142" watchObservedRunningTime="2026-03-20 16:56:02.682289862 +0000 UTC m=+4621.895661241"
Mar 20 16:56:03 crc kubenswrapper[4730]: I0320 16:56:03.669001 4730 generic.go:334] "Generic (PLEG): container finished" podID="f85cd1ac-f48f-46a3-81e3-f82b73719cb1" containerID="ad05ce67547cd2c7c1cc69cf885883fc4049286efb953f0b2d8378cbd56d924f" exitCode=0
Mar 20 16:56:03 crc kubenswrapper[4730]: I0320 16:56:03.669424 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567096-xvkcr" event={"ID":"f85cd1ac-f48f-46a3-81e3-f82b73719cb1","Type":"ContainerDied","Data":"ad05ce67547cd2c7c1cc69cf885883fc4049286efb953f0b2d8378cbd56d924f"}
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.127085 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.308032 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxdp8\" (UniqueName: \"kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8\") pod \"f85cd1ac-f48f-46a3-81e3-f82b73719cb1\" (UID: \"f85cd1ac-f48f-46a3-81e3-f82b73719cb1\") "
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.334409 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8" (OuterVolumeSpecName: "kube-api-access-zxdp8") pod "f85cd1ac-f48f-46a3-81e3-f82b73719cb1" (UID: "f85cd1ac-f48f-46a3-81e3-f82b73719cb1"). InnerVolumeSpecName "kube-api-access-zxdp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.410843 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxdp8\" (UniqueName: \"kubernetes.io/projected/f85cd1ac-f48f-46a3-81e3-f82b73719cb1-kube-api-access-zxdp8\") on node \"crc\" DevicePath \"\""
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.691024 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567096-xvkcr" event={"ID":"f85cd1ac-f48f-46a3-81e3-f82b73719cb1","Type":"ContainerDied","Data":"fc1a75c2059b3b96b5c3d8397b9f0be76a34830abeeee52f031a757c8923ae9f"}
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.691058 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1a75c2059b3b96b5c3d8397b9f0be76a34830abeeee52f031a757c8923ae9f"
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.691127 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567096-xvkcr"
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.753593 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567090-775ds"]
Mar 20 16:56:05 crc kubenswrapper[4730]: I0320 16:56:05.762485 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567090-775ds"]
Mar 20 16:56:07 crc kubenswrapper[4730]: I0320 16:56:07.559556 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347b4579-1fb8-49d0-97df-83dafdafae60" path="/var/lib/kubelet/pods/347b4579-1fb8-49d0-97df-83dafdafae60/volumes"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.850922 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"]
Mar 20 16:56:08 crc kubenswrapper[4730]: E0320 16:56:08.852089 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85cd1ac-f48f-46a3-81e3-f82b73719cb1" containerName="oc"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.852106 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85cd1ac-f48f-46a3-81e3-f82b73719cb1" containerName="oc"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.852446 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85cd1ac-f48f-46a3-81e3-f82b73719cb1" containerName="oc"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.854347 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.862704 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"]
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.993576 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdxsm\" (UniqueName: \"kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.993681 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:08 crc kubenswrapper[4730]: I0320 16:56:08.993752 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.096094 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.096204 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdxsm\" (UniqueName: \"kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.096336 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.096927 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.096966 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.149655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdxsm\" (UniqueName: \"kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm\") pod \"redhat-marketplace-jch7c\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.188613 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jch7c"
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.710949 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"]
Mar 20 16:56:09 crc kubenswrapper[4730]: I0320 16:56:09.730582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerStarted","Data":"76c65f9a3b0e0d371d9088499eb8c2a8e4892c80999af850b2c400a7c2669c0c"}
Mar 20 16:56:10 crc kubenswrapper[4730]: I0320 16:56:10.744582 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerID="4294471d41f464796f65094c95dd7a1856ebd4574db318d0dc4012ad1596dd16" exitCode=0
Mar 20 16:56:10 crc kubenswrapper[4730]: I0320 16:56:10.744673 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerDied","Data":"4294471d41f464796f65094c95dd7a1856ebd4574db318d0dc4012ad1596dd16"}
Mar 20 16:56:12 crc kubenswrapper[4730]: I0320 16:56:12.765726 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerID="c19f34ef26b5cfdd9a433322da9f8c1935c8cb8dfc879427725cad25c242ae4b" exitCode=0
Mar 20 16:56:12 crc kubenswrapper[4730]: I0320 16:56:12.765944 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerDied","Data":"c19f34ef26b5cfdd9a433322da9f8c1935c8cb8dfc879427725cad25c242ae4b"}
Mar 20 16:56:12 crc kubenswrapper[4730]: I0320 16:56:12.880616 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 16:56:12 crc kubenswrapper[4730]: I0320 16:56:12.881044 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 16:56:14 crc kubenswrapper[4730]: I0320 16:56:14.817513 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerStarted","Data":"0accd96625b6010b8f1ed489158a726d6acd23541ee8c36820a6f7d75e099aa5"}
Mar 20 16:56:14 crc kubenswrapper[4730]: I0320 16:56:14.837601 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jch7c" podStartSLOduration=4.244348059 podStartE2EDuration="6.837580074s" podCreationTimestamp="2026-03-20 16:56:08 +0000 UTC" firstStartedPulling="2026-03-20 16:56:10.747952259 +0000 UTC m=+4629.961323628" lastFinishedPulling="2026-03-20 16:56:13.341184234 +0000 UTC m=+4632.554555643" observedRunningTime="2026-03-20 16:56:14.834184028 +0000 UTC m=+4634.047555407" watchObservedRunningTime="2026-03-20 16:56:14.837580074 +0000 UTC m=+4634.050951443"
Mar 20 16:56:19 crc kubenswrapper[4730]:
I0320 16:56:19.189273 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:19 crc kubenswrapper[4730]: I0320 16:56:19.189927 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:19 crc kubenswrapper[4730]: I0320 16:56:19.235666 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:19 crc kubenswrapper[4730]: I0320 16:56:19.949444 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:20 crc kubenswrapper[4730]: I0320 16:56:20.027428 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"] Mar 20 16:56:21 crc kubenswrapper[4730]: I0320 16:56:21.898574 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jch7c" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="registry-server" containerID="cri-o://0accd96625b6010b8f1ed489158a726d6acd23541ee8c36820a6f7d75e099aa5" gracePeriod=2 Mar 20 16:56:22 crc kubenswrapper[4730]: I0320 16:56:22.909664 4730 generic.go:334] "Generic (PLEG): container finished" podID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerID="0accd96625b6010b8f1ed489158a726d6acd23541ee8c36820a6f7d75e099aa5" exitCode=0 Mar 20 16:56:22 crc kubenswrapper[4730]: I0320 16:56:22.909700 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerDied","Data":"0accd96625b6010b8f1ed489158a726d6acd23541ee8c36820a6f7d75e099aa5"} Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.235354 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.432909 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities\") pod \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.433025 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content\") pod \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.433145 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdxsm\" (UniqueName: \"kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm\") pod \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\" (UID: \"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202\") " Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.435018 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities" (OuterVolumeSpecName: "utilities") pod "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" (UID: "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.438518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm" (OuterVolumeSpecName: "kube-api-access-rdxsm") pod "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" (UID: "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202"). InnerVolumeSpecName "kube-api-access-rdxsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.458645 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" (UID: "8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.534859 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdxsm\" (UniqueName: \"kubernetes.io/projected/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-kube-api-access-rdxsm\") on node \"crc\" DevicePath \"\"" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.534894 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.534908 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.926479 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jch7c" event={"ID":"8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202","Type":"ContainerDied","Data":"76c65f9a3b0e0d371d9088499eb8c2a8e4892c80999af850b2c400a7c2669c0c"} Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.926882 4730 scope.go:117] "RemoveContainer" containerID="0accd96625b6010b8f1ed489158a726d6acd23541ee8c36820a6f7d75e099aa5" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.926660 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jch7c" Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.980407 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"] Mar 20 16:56:23 crc kubenswrapper[4730]: I0320 16:56:23.980918 4730 scope.go:117] "RemoveContainer" containerID="c19f34ef26b5cfdd9a433322da9f8c1935c8cb8dfc879427725cad25c242ae4b" Mar 20 16:56:24 crc kubenswrapper[4730]: I0320 16:56:24.003318 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jch7c"] Mar 20 16:56:24 crc kubenswrapper[4730]: I0320 16:56:24.013973 4730 scope.go:117] "RemoveContainer" containerID="4294471d41f464796f65094c95dd7a1856ebd4574db318d0dc4012ad1596dd16" Mar 20 16:56:25 crc kubenswrapper[4730]: I0320 16:56:25.549697 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" path="/var/lib/kubelet/pods/8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202/volumes" Mar 20 16:56:42 crc kubenswrapper[4730]: I0320 16:56:42.880154 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 16:56:42 crc kubenswrapper[4730]: I0320 16:56:42.880685 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 16:56:42 crc kubenswrapper[4730]: I0320 16:56:42.880730 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 
16:56:42 crc kubenswrapper[4730]: I0320 16:56:42.881517 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 16:56:42 crc kubenswrapper[4730]: I0320 16:56:42.881568 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" gracePeriod=600 Mar 20 16:56:43 crc kubenswrapper[4730]: E0320 16:56:43.006923 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:56:43 crc kubenswrapper[4730]: I0320 16:56:43.114490 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" exitCode=0 Mar 20 16:56:43 crc kubenswrapper[4730]: I0320 16:56:43.114541 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9"} Mar 20 16:56:43 crc kubenswrapper[4730]: I0320 16:56:43.114585 4730 scope.go:117] 
"RemoveContainer" containerID="1911f50d0c8e9a77c325e3c9daab7021dcd8581ac14223ba2443e24f00e39603" Mar 20 16:56:43 crc kubenswrapper[4730]: I0320 16:56:43.114996 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:56:43 crc kubenswrapper[4730]: E0320 16:56:43.115312 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:56:51 crc kubenswrapper[4730]: I0320 16:56:51.911527 4730 scope.go:117] "RemoveContainer" containerID="1b59cf8fa45b394473fc97dca5b148ed197f64170983d43d46fd520ecdca6208" Mar 20 16:56:55 crc kubenswrapper[4730]: I0320 16:56:55.534572 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:56:55 crc kubenswrapper[4730]: E0320 16:56:55.535452 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:57:07 crc kubenswrapper[4730]: I0320 16:57:07.533773 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:57:07 crc kubenswrapper[4730]: E0320 16:57:07.534514 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:57:19 crc kubenswrapper[4730]: I0320 16:57:19.533407 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:57:19 crc kubenswrapper[4730]: E0320 16:57:19.534349 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:57:33 crc kubenswrapper[4730]: I0320 16:57:33.533506 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:57:33 crc kubenswrapper[4730]: E0320 16:57:33.534306 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:57:45 crc kubenswrapper[4730]: I0320 16:57:45.533753 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:57:45 crc kubenswrapper[4730]: E0320 16:57:45.534946 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:57:59 crc kubenswrapper[4730]: I0320 16:57:59.534063 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:57:59 crc kubenswrapper[4730]: E0320 16:57:59.535109 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.151496 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567098-4wf7k"] Mar 20 16:58:00 crc kubenswrapper[4730]: E0320 16:58:00.151961 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="extract-utilities" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.151983 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="extract-utilities" Mar 20 16:58:00 crc kubenswrapper[4730]: E0320 16:58:00.152016 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="extract-content" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.152025 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="extract-content" Mar 20 16:58:00 crc 
kubenswrapper[4730]: E0320 16:58:00.152047 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="registry-server" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.152059 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="registry-server" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.152298 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8e0ea6-d9fa-4ce9-b5a9-f9ae60c75202" containerName="registry-server" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.153096 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.157749 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.158608 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.158652 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.170262 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567098-4wf7k"] Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.289498 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx\") pod \"auto-csr-approver-29567098-4wf7k\" (UID: \"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b\") " pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.392188 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx\") pod \"auto-csr-approver-29567098-4wf7k\" (UID: \"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b\") " pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.414976 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx\") pod \"auto-csr-approver-29567098-4wf7k\" (UID: \"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b\") " pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.480604 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.959186 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567098-4wf7k"] Mar 20 16:58:00 crc kubenswrapper[4730]: I0320 16:58:00.962441 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 16:58:01 crc kubenswrapper[4730]: I0320 16:58:01.958377 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" event={"ID":"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b","Type":"ContainerStarted","Data":"fb0cbff14033ddab7ef9c034ebc0a695ee494411468e24a895e3a77bb50c4501"} Mar 20 16:58:02 crc kubenswrapper[4730]: I0320 16:58:02.972097 4730 generic.go:334] "Generic (PLEG): container finished" podID="a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" containerID="7caef3ce3ae643ea42e6304dc53d81d43ac8e7cc2d51fc8c56b8771cdad2f656" exitCode=0 Mar 20 16:58:02 crc kubenswrapper[4730]: I0320 16:58:02.972198 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567098-4wf7k" event={"ID":"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b","Type":"ContainerDied","Data":"7caef3ce3ae643ea42e6304dc53d81d43ac8e7cc2d51fc8c56b8771cdad2f656"} Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.378314 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.488611 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx\") pod \"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b\" (UID: \"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b\") " Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.498697 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx" (OuterVolumeSpecName: "kube-api-access-bgfzx") pod "a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" (UID: "a8d7ea06-69cc-41b0-afc3-fb5f3e55049b"). InnerVolumeSpecName "kube-api-access-bgfzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.591602 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b-kube-api-access-bgfzx\") on node \"crc\" DevicePath \"\"" Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.991918 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" event={"ID":"a8d7ea06-69cc-41b0-afc3-fb5f3e55049b","Type":"ContainerDied","Data":"fb0cbff14033ddab7ef9c034ebc0a695ee494411468e24a895e3a77bb50c4501"} Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.992211 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0cbff14033ddab7ef9c034ebc0a695ee494411468e24a895e3a77bb50c4501" Mar 20 16:58:04 crc kubenswrapper[4730]: I0320 16:58:04.991947 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567098-4wf7k" Mar 20 16:58:05 crc kubenswrapper[4730]: I0320 16:58:05.446929 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567092-v8dfz"] Mar 20 16:58:05 crc kubenswrapper[4730]: I0320 16:58:05.456518 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567092-v8dfz"] Mar 20 16:58:05 crc kubenswrapper[4730]: I0320 16:58:05.546537 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e0f6b1-5405-420b-941e-4e711281673f" path="/var/lib/kubelet/pods/31e0f6b1-5405-420b-941e-4e711281673f/volumes" Mar 20 16:58:14 crc kubenswrapper[4730]: I0320 16:58:14.532696 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:58:14 crc kubenswrapper[4730]: E0320 16:58:14.533346 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:58:27 crc kubenswrapper[4730]: I0320 16:58:27.533839 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:58:27 crc kubenswrapper[4730]: E0320 16:58:27.534649 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:58:39 crc kubenswrapper[4730]: I0320 16:58:39.533373 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:58:39 crc kubenswrapper[4730]: E0320 16:58:39.535325 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.351489 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:58:50 crc kubenswrapper[4730]: E0320 16:58:50.353527 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" containerName="oc" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.353550 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" containerName="oc" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.353762 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" containerName="oc" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.355176 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.379102 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.481720 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.482499 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.482630 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlklg\" (UniqueName: \"kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " 
pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.586522 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.586970 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlklg\" (UniqueName: \"kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.587232 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.587233 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.587825 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc 
kubenswrapper[4730]: I0320 16:58:50.615955 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlklg\" (UniqueName: \"kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg\") pod \"redhat-operators-cft2p\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:50 crc kubenswrapper[4730]: I0320 16:58:50.691298 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:58:51 crc kubenswrapper[4730]: I0320 16:58:51.209638 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:58:51 crc kubenswrapper[4730]: I0320 16:58:51.486108 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerID="1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04" exitCode=0 Mar 20 16:58:51 crc kubenswrapper[4730]: I0320 16:58:51.486228 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerDied","Data":"1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04"} Mar 20 16:58:51 crc kubenswrapper[4730]: I0320 16:58:51.486445 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerStarted","Data":"3b704111a62172b53d35ce955ecd8ad0d92ece2f9021e12eea289f8e3c5e40aa"} Mar 20 16:58:52 crc kubenswrapper[4730]: I0320 16:58:52.029845 4730 scope.go:117] "RemoveContainer" containerID="ab6be6d2c86b64d8fa302dbb64f113f58f10be72cfc3ad77f609318442cc34dd" Mar 20 16:58:52 crc kubenswrapper[4730]: I0320 16:58:52.536879 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 
16:58:52 crc kubenswrapper[4730]: E0320 16:58:52.537632 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:58:52 crc kubenswrapper[4730]: I0320 16:58:52.543643 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerStarted","Data":"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c"} Mar 20 16:58:57 crc kubenswrapper[4730]: I0320 16:58:57.595681 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerID="fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c" exitCode=0 Mar 20 16:58:57 crc kubenswrapper[4730]: I0320 16:58:57.595758 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerDied","Data":"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c"} Mar 20 16:58:59 crc kubenswrapper[4730]: I0320 16:58:59.623672 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerStarted","Data":"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d"} Mar 20 16:58:59 crc kubenswrapper[4730]: I0320 16:58:59.646371 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cft2p" podStartSLOduration=1.8380339719999998 podStartE2EDuration="9.646355885s" podCreationTimestamp="2026-03-20 
16:58:50 +0000 UTC" firstStartedPulling="2026-03-20 16:58:51.488937472 +0000 UTC m=+4790.702308841" lastFinishedPulling="2026-03-20 16:58:59.297259385 +0000 UTC m=+4798.510630754" observedRunningTime="2026-03-20 16:58:59.642545307 +0000 UTC m=+4798.855916676" watchObservedRunningTime="2026-03-20 16:58:59.646355885 +0000 UTC m=+4798.859727254" Mar 20 16:59:00 crc kubenswrapper[4730]: I0320 16:59:00.691684 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:00 crc kubenswrapper[4730]: I0320 16:59:00.691763 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:01 crc kubenswrapper[4730]: I0320 16:59:01.746373 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cft2p" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" probeResult="failure" output=< Mar 20 16:59:01 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:59:01 crc kubenswrapper[4730]: > Mar 20 16:59:06 crc kubenswrapper[4730]: I0320 16:59:06.533884 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:59:06 crc kubenswrapper[4730]: E0320 16:59:06.535192 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:59:11 crc kubenswrapper[4730]: I0320 16:59:11.758126 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cft2p" 
podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" probeResult="failure" output=< Mar 20 16:59:11 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:59:11 crc kubenswrapper[4730]: > Mar 20 16:59:18 crc kubenswrapper[4730]: I0320 16:59:18.534225 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:59:18 crc kubenswrapper[4730]: E0320 16:59:18.535121 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:59:21 crc kubenswrapper[4730]: I0320 16:59:21.746099 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cft2p" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" probeResult="failure" output=< Mar 20 16:59:21 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 16:59:21 crc kubenswrapper[4730]: > Mar 20 16:59:30 crc kubenswrapper[4730]: I0320 16:59:30.533956 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:59:30 crc kubenswrapper[4730]: E0320 16:59:30.534983 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:59:30 crc kubenswrapper[4730]: I0320 16:59:30.770688 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:30 crc kubenswrapper[4730]: I0320 16:59:30.858233 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:31 crc kubenswrapper[4730]: I0320 16:59:31.010119 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.011429 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cft2p" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" containerID="cri-o://7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d" gracePeriod=2 Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.599050 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.768035 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities\") pod \"7a16e21d-182d-4c13-9089-49aceb2bf64e\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.768553 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content\") pod \"7a16e21d-182d-4c13-9089-49aceb2bf64e\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.768658 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlklg\" (UniqueName: \"kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg\") pod \"7a16e21d-182d-4c13-9089-49aceb2bf64e\" (UID: \"7a16e21d-182d-4c13-9089-49aceb2bf64e\") " Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.768960 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities" (OuterVolumeSpecName: "utilities") pod "7a16e21d-182d-4c13-9089-49aceb2bf64e" (UID: "7a16e21d-182d-4c13-9089-49aceb2bf64e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.769330 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.779518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg" (OuterVolumeSpecName: "kube-api-access-jlklg") pod "7a16e21d-182d-4c13-9089-49aceb2bf64e" (UID: "7a16e21d-182d-4c13-9089-49aceb2bf64e"). InnerVolumeSpecName "kube-api-access-jlklg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.874587 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlklg\" (UniqueName: \"kubernetes.io/projected/7a16e21d-182d-4c13-9089-49aceb2bf64e-kube-api-access-jlklg\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.907481 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a16e21d-182d-4c13-9089-49aceb2bf64e" (UID: "7a16e21d-182d-4c13-9089-49aceb2bf64e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:59:32 crc kubenswrapper[4730]: I0320 16:59:32.976754 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a16e21d-182d-4c13-9089-49aceb2bf64e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.022102 4730 generic.go:334] "Generic (PLEG): container finished" podID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerID="7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d" exitCode=0 Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.022141 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerDied","Data":"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d"} Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.022170 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cft2p" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.022187 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cft2p" event={"ID":"7a16e21d-182d-4c13-9089-49aceb2bf64e","Type":"ContainerDied","Data":"3b704111a62172b53d35ce955ecd8ad0d92ece2f9021e12eea289f8e3c5e40aa"} Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.022204 4730 scope.go:117] "RemoveContainer" containerID="7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.056129 4730 scope.go:117] "RemoveContainer" containerID="fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.069217 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.078973 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cft2p"] Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.084336 4730 scope.go:117] "RemoveContainer" containerID="1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.127387 4730 scope.go:117] "RemoveContainer" containerID="7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d" Mar 20 16:59:33 crc kubenswrapper[4730]: E0320 16:59:33.127753 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d\": container with ID starting with 7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d not found: ID does not exist" containerID="7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.127785 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d"} err="failed to get container status \"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d\": rpc error: code = NotFound desc = could not find container \"7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d\": container with ID starting with 7bb7ee354edd6a6ac56e90b66e1c20c758e33122d0f1e93edf3cfc7f257c473d not found: ID does not exist" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.127809 4730 scope.go:117] "RemoveContainer" containerID="fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c" Mar 20 16:59:33 crc kubenswrapper[4730]: E0320 16:59:33.128242 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c\": container with ID starting with fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c not found: ID does not exist" containerID="fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.128315 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c"} err="failed to get container status \"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c\": rpc error: code = NotFound desc = could not find container \"fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c\": container with ID starting with fa40729ee31ad53435997da4a543c920ca7bb80d17bc2b12c39cd8960d138b6c not found: ID does not exist" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.128369 4730 scope.go:117] "RemoveContainer" containerID="1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04" Mar 20 16:59:33 crc kubenswrapper[4730]: E0320 
16:59:33.128728 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04\": container with ID starting with 1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04 not found: ID does not exist" containerID="1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.128776 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04"} err="failed to get container status \"1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04\": rpc error: code = NotFound desc = could not find container \"1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04\": container with ID starting with 1ecc4f0c91882002fa7f0d51f0cc4607743264255f04b66356208c63914a4c04 not found: ID does not exist" Mar 20 16:59:33 crc kubenswrapper[4730]: I0320 16:59:33.547108 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" path="/var/lib/kubelet/pods/7a16e21d-182d-4c13-9089-49aceb2bf64e/volumes" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.158325 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:39 crc kubenswrapper[4730]: E0320 16:59:39.159533 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="extract-content" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.159557 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="extract-content" Mar 20 16:59:39 crc kubenswrapper[4730]: E0320 16:59:39.159576 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="extract-utilities" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.159589 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="extract-utilities" Mar 20 16:59:39 crc kubenswrapper[4730]: E0320 16:59:39.159640 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.159652 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.159970 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a16e21d-182d-4c13-9089-49aceb2bf64e" containerName="registry-server" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.162841 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.195295 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.326441 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ht5\" (UniqueName: \"kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.326501 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities\") pod \"certified-operators-r6zkc\" (UID: 
\"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.326641 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.428046 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.428115 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ht5\" (UniqueName: \"kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.428156 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.428560 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content\") pod \"certified-operators-r6zkc\" (UID: 
\"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.428655 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.448416 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ht5\" (UniqueName: \"kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5\") pod \"certified-operators-r6zkc\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:39 crc kubenswrapper[4730]: I0320 16:59:39.495381 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:40 crc kubenswrapper[4730]: I0320 16:59:40.026454 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:40 crc kubenswrapper[4730]: I0320 16:59:40.111126 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerStarted","Data":"ddb7ee25b4b232823e91d59af40a6a8a917740f268d360194245e482dd0fbdce"} Mar 20 16:59:41 crc kubenswrapper[4730]: I0320 16:59:41.121821 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerID="e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc" exitCode=0 Mar 20 16:59:41 crc kubenswrapper[4730]: I0320 16:59:41.123299 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" 
event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerDied","Data":"e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc"} Mar 20 16:59:42 crc kubenswrapper[4730]: I0320 16:59:42.135403 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerStarted","Data":"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3"} Mar 20 16:59:43 crc kubenswrapper[4730]: I0320 16:59:43.533640 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 16:59:43 crc kubenswrapper[4730]: E0320 16:59:43.534730 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 16:59:44 crc kubenswrapper[4730]: I0320 16:59:44.157007 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerID="fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3" exitCode=0 Mar 20 16:59:44 crc kubenswrapper[4730]: I0320 16:59:44.157083 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerDied","Data":"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3"} Mar 20 16:59:45 crc kubenswrapper[4730]: I0320 16:59:45.171970 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" 
event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerStarted","Data":"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b"} Mar 20 16:59:45 crc kubenswrapper[4730]: I0320 16:59:45.206708 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6zkc" podStartSLOduration=2.7460172099999998 podStartE2EDuration="6.20666459s" podCreationTimestamp="2026-03-20 16:59:39 +0000 UTC" firstStartedPulling="2026-03-20 16:59:41.123444726 +0000 UTC m=+4840.336816095" lastFinishedPulling="2026-03-20 16:59:44.584092116 +0000 UTC m=+4843.797463475" observedRunningTime="2026-03-20 16:59:45.196578094 +0000 UTC m=+4844.409949483" watchObservedRunningTime="2026-03-20 16:59:45.20666459 +0000 UTC m=+4844.420035959" Mar 20 16:59:49 crc kubenswrapper[4730]: I0320 16:59:49.495877 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:49 crc kubenswrapper[4730]: I0320 16:59:49.496542 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:49 crc kubenswrapper[4730]: I0320 16:59:49.553997 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:50 crc kubenswrapper[4730]: I0320 16:59:50.318938 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:50 crc kubenswrapper[4730]: I0320 16:59:50.603760 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.247945 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6zkc" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="registry-server" 
containerID="cri-o://f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b" gracePeriod=2 Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.788360 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.834944 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ht5\" (UniqueName: \"kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5\") pod \"b1f9169d-1b10-4fa9-a78c-8d0629059181\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.835088 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content\") pod \"b1f9169d-1b10-4fa9-a78c-8d0629059181\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.835185 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities\") pod \"b1f9169d-1b10-4fa9-a78c-8d0629059181\" (UID: \"b1f9169d-1b10-4fa9-a78c-8d0629059181\") " Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.837284 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities" (OuterVolumeSpecName: "utilities") pod "b1f9169d-1b10-4fa9-a78c-8d0629059181" (UID: "b1f9169d-1b10-4fa9-a78c-8d0629059181"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.846186 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5" (OuterVolumeSpecName: "kube-api-access-94ht5") pod "b1f9169d-1b10-4fa9-a78c-8d0629059181" (UID: "b1f9169d-1b10-4fa9-a78c-8d0629059181"). InnerVolumeSpecName "kube-api-access-94ht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.925075 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1f9169d-1b10-4fa9-a78c-8d0629059181" (UID: "b1f9169d-1b10-4fa9-a78c-8d0629059181"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.939799 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.939843 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ht5\" (UniqueName: \"kubernetes.io/projected/b1f9169d-1b10-4fa9-a78c-8d0629059181-kube-api-access-94ht5\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:52 crc kubenswrapper[4730]: I0320 16:59:52.939858 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f9169d-1b10-4fa9-a78c-8d0629059181-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.260427 4730 generic.go:334] "Generic (PLEG): container finished" podID="b1f9169d-1b10-4fa9-a78c-8d0629059181" 
containerID="f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b" exitCode=0 Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.260503 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerDied","Data":"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b"} Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.260532 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6zkc" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.260564 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6zkc" event={"ID":"b1f9169d-1b10-4fa9-a78c-8d0629059181","Type":"ContainerDied","Data":"ddb7ee25b4b232823e91d59af40a6a8a917740f268d360194245e482dd0fbdce"} Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.260595 4730 scope.go:117] "RemoveContainer" containerID="f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.281561 4730 scope.go:117] "RemoveContainer" containerID="fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.318978 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.330242 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6zkc"] Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.332949 4730 scope.go:117] "RemoveContainer" containerID="e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.377150 4730 scope.go:117] "RemoveContainer" containerID="f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b" Mar 20 
16:59:53 crc kubenswrapper[4730]: E0320 16:59:53.377761 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b\": container with ID starting with f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b not found: ID does not exist" containerID="f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.377803 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b"} err="failed to get container status \"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b\": rpc error: code = NotFound desc = could not find container \"f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b\": container with ID starting with f9cb933b2bbced58f72d1e703311eab42c9f076e0b3d7d1b5a991d19fbebe31b not found: ID does not exist" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.377828 4730 scope.go:117] "RemoveContainer" containerID="fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3" Mar 20 16:59:53 crc kubenswrapper[4730]: E0320 16:59:53.378242 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3\": container with ID starting with fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3 not found: ID does not exist" containerID="fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.378310 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3"} err="failed to get container status 
\"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3\": rpc error: code = NotFound desc = could not find container \"fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3\": container with ID starting with fc7a56139b1dc20f843850c6ed730e68fb5324820a378dd38eb40e9f321e26b3 not found: ID does not exist" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.378336 4730 scope.go:117] "RemoveContainer" containerID="e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc" Mar 20 16:59:53 crc kubenswrapper[4730]: E0320 16:59:53.378726 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc\": container with ID starting with e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc not found: ID does not exist" containerID="e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.378755 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc"} err="failed to get container status \"e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc\": rpc error: code = NotFound desc = could not find container \"e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc\": container with ID starting with e93f916c76f0c714a78d5f1db0c26a9a99a0af0d21ca7e5208e0a5cdff340fcc not found: ID does not exist" Mar 20 16:59:53 crc kubenswrapper[4730]: I0320 16:59:53.546636 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" path="/var/lib/kubelet/pods/b1f9169d-1b10-4fa9-a78c-8d0629059181/volumes" Mar 20 16:59:55 crc kubenswrapper[4730]: I0320 16:59:55.533922 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 
16:59:55 crc kubenswrapper[4730]: E0320 16:59:55.534880 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.171400 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567100-vgj2v"] Mar 20 17:00:00 crc kubenswrapper[4730]: E0320 17:00:00.172138 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="registry-server" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.172153 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="registry-server" Mar 20 17:00:00 crc kubenswrapper[4730]: E0320 17:00:00.172191 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="extract-utilities" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.172199 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="extract-utilities" Mar 20 17:00:00 crc kubenswrapper[4730]: E0320 17:00:00.172214 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="extract-content" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.172222 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="extract-content" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.172465 4730 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b1f9169d-1b10-4fa9-a78c-8d0629059181" containerName="registry-server" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.173241 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.176222 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.176380 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.178989 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.192826 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74"] Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.194589 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.197452 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.199547 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.203345 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567100-vgj2v"] Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.205074 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkzv\" (UniqueName: \"kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv\") pod \"auto-csr-approver-29567100-vgj2v\" (UID: \"1293fe12-0f59-44fb-b726-9d72c790dabd\") " pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.229913 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74"] Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.307348 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.307662 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.307729 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkzv\" (UniqueName: \"kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv\") pod \"auto-csr-approver-29567100-vgj2v\" (UID: \"1293fe12-0f59-44fb-b726-9d72c790dabd\") " pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.307796 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt95r\" (UniqueName: \"kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.330166 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkzv\" (UniqueName: \"kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv\") pod \"auto-csr-approver-29567100-vgj2v\" (UID: \"1293fe12-0f59-44fb-b726-9d72c790dabd\") " pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.410068 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 
17:00:00.410202 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt95r\" (UniqueName: \"kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.410407 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.411991 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.414597 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.436048 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt95r\" (UniqueName: \"kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r\") pod \"collect-profiles-29567100-q7w74\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.499594 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.523388 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:00 crc kubenswrapper[4730]: I0320 17:00:00.989623 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567100-vgj2v"] Mar 20 17:00:01 crc kubenswrapper[4730]: W0320 17:00:01.075544 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85790076_e9c5_4a47_9a91_23e3371238e0.slice/crio-53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1 WatchSource:0}: Error finding container 53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1: Status 404 returned error can't find the container with id 53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1 Mar 20 17:00:01 crc kubenswrapper[4730]: I0320 17:00:01.075956 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74"] Mar 20 17:00:01 crc kubenswrapper[4730]: I0320 17:00:01.359511 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" event={"ID":"85790076-e9c5-4a47-9a91-23e3371238e0","Type":"ContainerStarted","Data":"98a696c1bfd3c064c3dfdf3b917dd5513f01acf718b3d9511f52bd8373893cf2"} Mar 20 17:00:01 crc kubenswrapper[4730]: I0320 17:00:01.359562 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" 
event={"ID":"85790076-e9c5-4a47-9a91-23e3371238e0","Type":"ContainerStarted","Data":"53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1"} Mar 20 17:00:01 crc kubenswrapper[4730]: I0320 17:00:01.361747 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" event={"ID":"1293fe12-0f59-44fb-b726-9d72c790dabd","Type":"ContainerStarted","Data":"e1c9976bc45f87566b1b1016223c485c739f36677650be579d93b3378ac982b4"} Mar 20 17:00:01 crc kubenswrapper[4730]: I0320 17:00:01.383495 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" podStartSLOduration=1.383475043 podStartE2EDuration="1.383475043s" podCreationTimestamp="2026-03-20 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 17:00:01.379726607 +0000 UTC m=+4860.593097986" watchObservedRunningTime="2026-03-20 17:00:01.383475043 +0000 UTC m=+4860.596846412" Mar 20 17:00:02 crc kubenswrapper[4730]: I0320 17:00:02.373316 4730 generic.go:334] "Generic (PLEG): container finished" podID="85790076-e9c5-4a47-9a91-23e3371238e0" containerID="98a696c1bfd3c064c3dfdf3b917dd5513f01acf718b3d9511f52bd8373893cf2" exitCode=0 Mar 20 17:00:02 crc kubenswrapper[4730]: I0320 17:00:02.373361 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" event={"ID":"85790076-e9c5-4a47-9a91-23e3371238e0","Type":"ContainerDied","Data":"98a696c1bfd3c064c3dfdf3b917dd5513f01acf718b3d9511f52bd8373893cf2"} Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.846467 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.891589 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt95r\" (UniqueName: \"kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r\") pod \"85790076-e9c5-4a47-9a91-23e3371238e0\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.891762 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume\") pod \"85790076-e9c5-4a47-9a91-23e3371238e0\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.891838 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume\") pod \"85790076-e9c5-4a47-9a91-23e3371238e0\" (UID: \"85790076-e9c5-4a47-9a91-23e3371238e0\") " Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.892150 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "85790076-e9c5-4a47-9a91-23e3371238e0" (UID: "85790076-e9c5-4a47-9a91-23e3371238e0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.892319 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85790076-e9c5-4a47-9a91-23e3371238e0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.905476 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r" (OuterVolumeSpecName: "kube-api-access-jt95r") pod "85790076-e9c5-4a47-9a91-23e3371238e0" (UID: "85790076-e9c5-4a47-9a91-23e3371238e0"). InnerVolumeSpecName "kube-api-access-jt95r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.905839 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85790076-e9c5-4a47-9a91-23e3371238e0" (UID: "85790076-e9c5-4a47-9a91-23e3371238e0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.994701 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85790076-e9c5-4a47-9a91-23e3371238e0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 17:00:03 crc kubenswrapper[4730]: I0320 17:00:03.994732 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt95r\" (UniqueName: \"kubernetes.io/projected/85790076-e9c5-4a47-9a91-23e3371238e0-kube-api-access-jt95r\") on node \"crc\" DevicePath \"\"" Mar 20 17:00:04 crc kubenswrapper[4730]: I0320 17:00:04.398220 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" event={"ID":"85790076-e9c5-4a47-9a91-23e3371238e0","Type":"ContainerDied","Data":"53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1"} Mar 20 17:00:04 crc kubenswrapper[4730]: I0320 17:00:04.398272 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53df402fb46fe245b14079dfe719c4b3cda288137d5920adfef0e389c3a057f1" Mar 20 17:00:04 crc kubenswrapper[4730]: I0320 17:00:04.398330 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567100-q7w74" Mar 20 17:00:04 crc kubenswrapper[4730]: I0320 17:00:04.471393 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z"] Mar 20 17:00:04 crc kubenswrapper[4730]: I0320 17:00:04.482136 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567055-r9s4z"] Mar 20 17:00:05 crc kubenswrapper[4730]: I0320 17:00:05.554696 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af1b002-c577-4334-8304-5f44a67a5119" path="/var/lib/kubelet/pods/5af1b002-c577-4334-8304-5f44a67a5119/volumes" Mar 20 17:00:09 crc kubenswrapper[4730]: I0320 17:00:09.533487 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:00:09 crc kubenswrapper[4730]: E0320 17:00:09.534363 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:00:13 crc kubenswrapper[4730]: I0320 17:00:13.541788 4730 generic.go:334] "Generic (PLEG): container finished" podID="1293fe12-0f59-44fb-b726-9d72c790dabd" containerID="f0d02d12d3b8583d27ef06f8d4e4230e6d9bdedae9fb10c5b6dcf9c218e3e2d5" exitCode=0 Mar 20 17:00:13 crc kubenswrapper[4730]: I0320 17:00:13.575923 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" event={"ID":"1293fe12-0f59-44fb-b726-9d72c790dabd","Type":"ContainerDied","Data":"f0d02d12d3b8583d27ef06f8d4e4230e6d9bdedae9fb10c5b6dcf9c218e3e2d5"} Mar 20 
17:00:14 crc kubenswrapper[4730]: I0320 17:00:14.973467 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.050871 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvkzv\" (UniqueName: \"kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv\") pod \"1293fe12-0f59-44fb-b726-9d72c790dabd\" (UID: \"1293fe12-0f59-44fb-b726-9d72c790dabd\") " Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.057455 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv" (OuterVolumeSpecName: "kube-api-access-rvkzv") pod "1293fe12-0f59-44fb-b726-9d72c790dabd" (UID: "1293fe12-0f59-44fb-b726-9d72c790dabd"). InnerVolumeSpecName "kube-api-access-rvkzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.153364 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvkzv\" (UniqueName: \"kubernetes.io/projected/1293fe12-0f59-44fb-b726-9d72c790dabd-kube-api-access-rvkzv\") on node \"crc\" DevicePath \"\"" Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.566322 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" event={"ID":"1293fe12-0f59-44fb-b726-9d72c790dabd","Type":"ContainerDied","Data":"e1c9976bc45f87566b1b1016223c485c739f36677650be579d93b3378ac982b4"} Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.566397 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1c9976bc45f87566b1b1016223c485c739f36677650be579d93b3378ac982b4" Mar 20 17:00:15 crc kubenswrapper[4730]: I0320 17:00:15.566468 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567100-vgj2v" Mar 20 17:00:16 crc kubenswrapper[4730]: I0320 17:00:16.051864 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567094-5zljx"] Mar 20 17:00:16 crc kubenswrapper[4730]: I0320 17:00:16.060308 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567094-5zljx"] Mar 20 17:00:17 crc kubenswrapper[4730]: I0320 17:00:17.560113 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a009be79-bc2f-45ca-94b9-f0da37a6abdc" path="/var/lib/kubelet/pods/a009be79-bc2f-45ca-94b9-f0da37a6abdc/volumes" Mar 20 17:00:24 crc kubenswrapper[4730]: I0320 17:00:24.533385 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:00:24 crc kubenswrapper[4730]: E0320 17:00:24.534290 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:00:38 crc kubenswrapper[4730]: I0320 17:00:38.533932 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:00:38 crc kubenswrapper[4730]: E0320 17:00:38.534988 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:00:52 crc kubenswrapper[4730]: I0320 17:00:52.167679 4730 scope.go:117] "RemoveContainer" containerID="e6a0ef74485773b0b9248ebe8daaaea6116dc164de300c92368c1b1d44b5c372" Mar 20 17:00:52 crc kubenswrapper[4730]: I0320 17:00:52.203283 4730 scope.go:117] "RemoveContainer" containerID="bfe54023e94fe2434c4e1c76acbd4f6a1cd0b3f0a5a82a87fd8b931a2f1901c9" Mar 20 17:00:52 crc kubenswrapper[4730]: I0320 17:00:52.533054 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:00:52 crc kubenswrapper[4730]: E0320 17:00:52.533681 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.187926 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29567101-lzlmb"] Mar 20 17:01:00 crc kubenswrapper[4730]: E0320 17:01:00.189023 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85790076-e9c5-4a47-9a91-23e3371238e0" containerName="collect-profiles" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.189039 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="85790076-e9c5-4a47-9a91-23e3371238e0" containerName="collect-profiles" Mar 20 17:01:00 crc kubenswrapper[4730]: E0320 17:01:00.189051 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1293fe12-0f59-44fb-b726-9d72c790dabd" containerName="oc" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.189059 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="1293fe12-0f59-44fb-b726-9d72c790dabd" 
containerName="oc" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.189344 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="85790076-e9c5-4a47-9a91-23e3371238e0" containerName="collect-profiles" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.189356 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="1293fe12-0f59-44fb-b726-9d72c790dabd" containerName="oc" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.190163 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.199890 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29567101-lzlmb"] Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.336946 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.337429 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.337474 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc 
kubenswrapper[4730]: I0320 17:01:00.337530 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25frt\" (UniqueName: \"kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.439710 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.439868 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.440013 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25frt\" (UniqueName: \"kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.440290 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: 
I0320 17:01:00.447298 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.447808 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.454188 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.460669 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25frt\" (UniqueName: \"kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt\") pod \"keystone-cron-29567101-lzlmb\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:00 crc kubenswrapper[4730]: I0320 17:01:00.518689 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:01 crc kubenswrapper[4730]: I0320 17:01:01.013505 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29567101-lzlmb"] Mar 20 17:01:01 crc kubenswrapper[4730]: I0320 17:01:01.640109 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567101-lzlmb" event={"ID":"8a113133-c537-41c7-a14e-614fb8bcd24f","Type":"ContainerStarted","Data":"0978ba5bfa0ab31eba2979df1d165332fe49e303739053270ed423cb2aa3e09c"} Mar 20 17:01:01 crc kubenswrapper[4730]: I0320 17:01:01.640506 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567101-lzlmb" event={"ID":"8a113133-c537-41c7-a14e-614fb8bcd24f","Type":"ContainerStarted","Data":"21e481ed3b6f703dfa039524e2814e97aab2752e3bd7ea80b43efd84a998db93"} Mar 20 17:01:01 crc kubenswrapper[4730]: I0320 17:01:01.666507 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29567101-lzlmb" podStartSLOduration=1.666490099 podStartE2EDuration="1.666490099s" podCreationTimestamp="2026-03-20 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 17:01:01.656416232 +0000 UTC m=+4920.869787621" watchObservedRunningTime="2026-03-20 17:01:01.666490099 +0000 UTC m=+4920.879861468" Mar 20 17:01:05 crc kubenswrapper[4730]: I0320 17:01:05.687298 4730 generic.go:334] "Generic (PLEG): container finished" podID="8a113133-c537-41c7-a14e-614fb8bcd24f" containerID="0978ba5bfa0ab31eba2979df1d165332fe49e303739053270ed423cb2aa3e09c" exitCode=0 Mar 20 17:01:05 crc kubenswrapper[4730]: I0320 17:01:05.687350 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567101-lzlmb" 
event={"ID":"8a113133-c537-41c7-a14e-614fb8bcd24f","Type":"ContainerDied","Data":"0978ba5bfa0ab31eba2979df1d165332fe49e303739053270ed423cb2aa3e09c"} Mar 20 17:01:06 crc kubenswrapper[4730]: I0320 17:01:06.534176 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:01:06 crc kubenswrapper[4730]: E0320 17:01:06.535439 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.522519 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.706787 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29567101-lzlmb" event={"ID":"8a113133-c537-41c7-a14e-614fb8bcd24f","Type":"ContainerDied","Data":"21e481ed3b6f703dfa039524e2814e97aab2752e3bd7ea80b43efd84a998db93"} Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.706824 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e481ed3b6f703dfa039524e2814e97aab2752e3bd7ea80b43efd84a998db93" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.706870 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29567101-lzlmb" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.710452 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys\") pod \"8a113133-c537-41c7-a14e-614fb8bcd24f\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.710512 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle\") pod \"8a113133-c537-41c7-a14e-614fb8bcd24f\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.710583 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25frt\" (UniqueName: \"kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt\") pod \"8a113133-c537-41c7-a14e-614fb8bcd24f\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.710657 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data\") pod \"8a113133-c537-41c7-a14e-614fb8bcd24f\" (UID: \"8a113133-c537-41c7-a14e-614fb8bcd24f\") " Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.723468 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a113133-c537-41c7-a14e-614fb8bcd24f" (UID: "8a113133-c537-41c7-a14e-614fb8bcd24f"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.723521 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt" (OuterVolumeSpecName: "kube-api-access-25frt") pod "8a113133-c537-41c7-a14e-614fb8bcd24f" (UID: "8a113133-c537-41c7-a14e-614fb8bcd24f"). InnerVolumeSpecName "kube-api-access-25frt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.762763 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a113133-c537-41c7-a14e-614fb8bcd24f" (UID: "8a113133-c537-41c7-a14e-614fb8bcd24f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.778962 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data" (OuterVolumeSpecName: "config-data") pod "8a113133-c537-41c7-a14e-614fb8bcd24f" (UID: "8a113133-c537-41c7-a14e-614fb8bcd24f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.812834 4730 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.812876 4730 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.812894 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25frt\" (UniqueName: \"kubernetes.io/projected/8a113133-c537-41c7-a14e-614fb8bcd24f-kube-api-access-25frt\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:07 crc kubenswrapper[4730]: I0320 17:01:07.812906 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a113133-c537-41c7-a14e-614fb8bcd24f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:21 crc kubenswrapper[4730]: I0320 17:01:21.541414 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:01:21 crc kubenswrapper[4730]: E0320 17:01:21.542498 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:01:34 crc kubenswrapper[4730]: I0320 17:01:34.533185 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:01:34 crc 
kubenswrapper[4730]: E0320 17:01:34.534268 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:01:45 crc kubenswrapper[4730]: I0320 17:01:45.533692 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:01:46 crc kubenswrapper[4730]: I0320 17:01:46.148753 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6"} Mar 20 17:01:54 crc kubenswrapper[4730]: I0320 17:01:54.240841 4730 generic.go:334] "Generic (PLEG): container finished" podID="c69a80b5-69a7-48c5-8ad4-5063b6cb4676" containerID="7d54b7219e1263f587317fcdebaae6f3c46012a7941ad45c24813ffa14627f5b" exitCode=1 Mar 20 17:01:54 crc kubenswrapper[4730]: I0320 17:01:54.240925 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c69a80b5-69a7-48c5-8ad4-5063b6cb4676","Type":"ContainerDied","Data":"7d54b7219e1263f587317fcdebaae6f3c46012a7941ad45c24813ffa14627f5b"} Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.774022 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847673 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847732 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847756 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75t8r\" (UniqueName: \"kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847793 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847870 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847952 4730 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.847991 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.848023 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.848119 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary\") pod \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\" (UID: \"c69a80b5-69a7-48c5-8ad4-5063b6cb4676\") " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.849081 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.849981 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data" (OuterVolumeSpecName: "config-data") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.856845 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.882162 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r" (OuterVolumeSpecName: "kube-api-access-75t8r") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "kube-api-access-75t8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.890435 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.907415 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.919574 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954424 4730 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954461 4730 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954476 4730 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954489 4730 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954500 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75t8r\" (UniqueName: \"kubernetes.io/projected/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-kube-api-access-75t8r\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954510 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.954523 4730 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:55 crc kubenswrapper[4730]: I0320 17:01:55.955457 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.027707 4730 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.050276 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c69a80b5-69a7-48c5-8ad4-5063b6cb4676" (UID: "c69a80b5-69a7-48c5-8ad4-5063b6cb4676"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.056101 4730 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.056128 4730 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.056139 4730 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c69a80b5-69a7-48c5-8ad4-5063b6cb4676-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.269702 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c69a80b5-69a7-48c5-8ad4-5063b6cb4676","Type":"ContainerDied","Data":"3a3b9dc78f4221095ec1a260d19a50071b9bafd10f8f90a8b372cb1bb88e13e5"} Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.269761 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3b9dc78f4221095ec1a260d19a50071b9bafd10f8f90a8b372cb1bb88e13e5" Mar 20 17:01:56 crc kubenswrapper[4730]: I0320 17:01:56.269803 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.167875 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567102-k48jl"] Mar 20 17:02:00 crc kubenswrapper[4730]: E0320 17:02:00.169925 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a113133-c537-41c7-a14e-614fb8bcd24f" containerName="keystone-cron" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.170032 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a113133-c537-41c7-a14e-614fb8bcd24f" containerName="keystone-cron" Mar 20 17:02:00 crc kubenswrapper[4730]: E0320 17:02:00.170138 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69a80b5-69a7-48c5-8ad4-5063b6cb4676" containerName="tempest-tests-tempest-tests-runner" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.170219 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69a80b5-69a7-48c5-8ad4-5063b6cb4676" containerName="tempest-tests-tempest-tests-runner" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.170574 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a113133-c537-41c7-a14e-614fb8bcd24f" containerName="keystone-cron" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.170813 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69a80b5-69a7-48c5-8ad4-5063b6cb4676" containerName="tempest-tests-tempest-tests-runner" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.171721 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.173740 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.173990 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.174842 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.180465 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567102-k48jl"] Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.288787 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z57r\" (UniqueName: \"kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r\") pod \"auto-csr-approver-29567102-k48jl\" (UID: \"d317b26b-912f-4276-a234-084782092ff3\") " pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.317358 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.319568 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.321867 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gh48g" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.326213 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.391373 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z57r\" (UniqueName: \"kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r\") pod \"auto-csr-approver-29567102-k48jl\" (UID: \"d317b26b-912f-4276-a234-084782092ff3\") " pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.412282 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z57r\" (UniqueName: \"kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r\") pod \"auto-csr-approver-29567102-k48jl\" (UID: \"d317b26b-912f-4276-a234-084782092ff3\") " pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.489819 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.494206 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d66zz\" (UniqueName: \"kubernetes.io/projected/d79eb29a-c814-4aa0-a268-2069d58b08d2-kube-api-access-d66zz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.494379 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.595859 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d66zz\" (UniqueName: \"kubernetes.io/projected/d79eb29a-c814-4aa0-a268-2069d58b08d2-kube-api-access-d66zz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.595902 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.596262 4730 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.619938 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d66zz\" (UniqueName: \"kubernetes.io/projected/d79eb29a-c814-4aa0-a268-2069d58b08d2-kube-api-access-d66zz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.631576 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d79eb29a-c814-4aa0-a268-2069d58b08d2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:00 crc kubenswrapper[4730]: I0320 17:02:00.642706 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 20 17:02:01 crc kubenswrapper[4730]: I0320 17:02:01.089706 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567102-k48jl"] Mar 20 17:02:01 crc kubenswrapper[4730]: W0320 17:02:01.194779 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79eb29a_c814_4aa0_a268_2069d58b08d2.slice/crio-bda47bb83d0a47e9e8896f47119001302843c3eb091e25a7eb760efbbe20544e WatchSource:0}: Error finding container bda47bb83d0a47e9e8896f47119001302843c3eb091e25a7eb760efbbe20544e: Status 404 returned error can't find the container with id bda47bb83d0a47e9e8896f47119001302843c3eb091e25a7eb760efbbe20544e Mar 20 17:02:01 crc kubenswrapper[4730]: I0320 17:02:01.197213 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 20 17:02:01 crc kubenswrapper[4730]: I0320 17:02:01.321820 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567102-k48jl" event={"ID":"d317b26b-912f-4276-a234-084782092ff3","Type":"ContainerStarted","Data":"e866ddd76cac6a16b70501572318fe6f314ed313536675c55b5aa885d38c9e42"} Mar 20 17:02:01 crc kubenswrapper[4730]: I0320 17:02:01.323380 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d79eb29a-c814-4aa0-a268-2069d58b08d2","Type":"ContainerStarted","Data":"bda47bb83d0a47e9e8896f47119001302843c3eb091e25a7eb760efbbe20544e"} Mar 20 17:02:03 crc kubenswrapper[4730]: I0320 17:02:03.354560 4730 generic.go:334] "Generic (PLEG): container finished" podID="d317b26b-912f-4276-a234-084782092ff3" containerID="bf551572383a97d7725c248e69428cf0db8c3b25722e05b3bf7b441d82bc6b56" exitCode=0 Mar 20 17:02:03 crc kubenswrapper[4730]: I0320 17:02:03.354634 4730 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567102-k48jl" event={"ID":"d317b26b-912f-4276-a234-084782092ff3","Type":"ContainerDied","Data":"bf551572383a97d7725c248e69428cf0db8c3b25722e05b3bf7b441d82bc6b56"} Mar 20 17:02:03 crc kubenswrapper[4730]: I0320 17:02:03.356828 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d79eb29a-c814-4aa0-a268-2069d58b08d2","Type":"ContainerStarted","Data":"f6443ae982c113e8299daeacfa8f2b42dffd4d724dbf09dadca8f0160bc9718a"} Mar 20 17:02:03 crc kubenswrapper[4730]: I0320 17:02:03.403056 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.508226136 podStartE2EDuration="3.403035637s" podCreationTimestamp="2026-03-20 17:02:00 +0000 UTC" firstStartedPulling="2026-03-20 17:02:01.196761487 +0000 UTC m=+4980.410132856" lastFinishedPulling="2026-03-20 17:02:02.091570978 +0000 UTC m=+4981.304942357" observedRunningTime="2026-03-20 17:02:03.395860582 +0000 UTC m=+4982.609231961" watchObservedRunningTime="2026-03-20 17:02:03.403035637 +0000 UTC m=+4982.616407006" Mar 20 17:02:04 crc kubenswrapper[4730]: I0320 17:02:04.763408 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:04 crc kubenswrapper[4730]: I0320 17:02:04.898719 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z57r\" (UniqueName: \"kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r\") pod \"d317b26b-912f-4276-a234-084782092ff3\" (UID: \"d317b26b-912f-4276-a234-084782092ff3\") " Mar 20 17:02:04 crc kubenswrapper[4730]: I0320 17:02:04.907545 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r" (OuterVolumeSpecName: "kube-api-access-9z57r") pod "d317b26b-912f-4276-a234-084782092ff3" (UID: "d317b26b-912f-4276-a234-084782092ff3"). InnerVolumeSpecName "kube-api-access-9z57r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.002063 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z57r\" (UniqueName: \"kubernetes.io/projected/d317b26b-912f-4276-a234-084782092ff3-kube-api-access-9z57r\") on node \"crc\" DevicePath \"\"" Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.382548 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567102-k48jl" event={"ID":"d317b26b-912f-4276-a234-084782092ff3","Type":"ContainerDied","Data":"e866ddd76cac6a16b70501572318fe6f314ed313536675c55b5aa885d38c9e42"} Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.383048 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e866ddd76cac6a16b70501572318fe6f314ed313536675c55b5aa885d38c9e42" Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.382696 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567102-k48jl" Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.857552 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567096-xvkcr"] Mar 20 17:02:05 crc kubenswrapper[4730]: I0320 17:02:05.869429 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567096-xvkcr"] Mar 20 17:02:07 crc kubenswrapper[4730]: I0320 17:02:07.554521 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85cd1ac-f48f-46a3-81e3-f82b73719cb1" path="/var/lib/kubelet/pods/f85cd1ac-f48f-46a3-81e3-f82b73719cb1/volumes" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.831374 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zs57x/must-gather-nxsd6"] Mar 20 17:02:46 crc kubenswrapper[4730]: E0320 17:02:46.832278 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d317b26b-912f-4276-a234-084782092ff3" containerName="oc" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.832292 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d317b26b-912f-4276-a234-084782092ff3" containerName="oc" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.832494 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d317b26b-912f-4276-a234-084782092ff3" containerName="oc" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.833506 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.835357 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zs57x"/"default-dockercfg-kdxq2" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.836378 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zs57x"/"kube-root-ca.crt" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.837924 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zs57x"/"openshift-service-ca.crt" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.853795 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zs57x/must-gather-nxsd6"] Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.927780 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzpq\" (UniqueName: \"kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:46 crc kubenswrapper[4730]: I0320 17:02:46.927908 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.029293 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrzpq\" (UniqueName: \"kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " 
pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.029421 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.029807 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.047561 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrzpq\" (UniqueName: \"kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq\") pod \"must-gather-nxsd6\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.152453 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.631366 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zs57x/must-gather-nxsd6"] Mar 20 17:02:47 crc kubenswrapper[4730]: I0320 17:02:47.886238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/must-gather-nxsd6" event={"ID":"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d","Type":"ContainerStarted","Data":"3bc790ab8f6e59c3b86417b0a55bed11ef4d53e15381c3de1bb130ff16d58cbf"} Mar 20 17:02:52 crc kubenswrapper[4730]: I0320 17:02:52.379240 4730 scope.go:117] "RemoveContainer" containerID="ad05ce67547cd2c7c1cc69cf885883fc4049286efb953f0b2d8378cbd56d924f" Mar 20 17:02:54 crc kubenswrapper[4730]: I0320 17:02:54.991238 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/must-gather-nxsd6" event={"ID":"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d","Type":"ContainerStarted","Data":"5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d"} Mar 20 17:02:54 crc kubenswrapper[4730]: I0320 17:02:54.991798 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/must-gather-nxsd6" event={"ID":"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d","Type":"ContainerStarted","Data":"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e"} Mar 20 17:02:55 crc kubenswrapper[4730]: I0320 17:02:55.006859 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zs57x/must-gather-nxsd6" podStartSLOduration=2.2706341119999998 podStartE2EDuration="9.006843952s" podCreationTimestamp="2026-03-20 17:02:46 +0000 UTC" firstStartedPulling="2026-03-20 17:02:47.635884781 +0000 UTC m=+5026.849256150" lastFinishedPulling="2026-03-20 17:02:54.372094611 +0000 UTC m=+5033.585465990" observedRunningTime="2026-03-20 17:02:55.004183856 +0000 UTC m=+5034.217555225" watchObservedRunningTime="2026-03-20 
17:02:55.006843952 +0000 UTC m=+5034.220215321" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.756608 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zs57x/crc-debug-xtxgl"] Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.758432 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.845691 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.845843 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bj9r\" (UniqueName: \"kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.947219 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.947369 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.947386 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bj9r\" (UniqueName: \"kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:58 crc kubenswrapper[4730]: I0320 17:02:58.970361 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bj9r\" (UniqueName: \"kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r\") pod \"crc-debug-xtxgl\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:02:59 crc kubenswrapper[4730]: I0320 17:02:59.086586 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:03:00 crc kubenswrapper[4730]: I0320 17:03:00.043908 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" event={"ID":"bd7a63f3-7b9b-489f-b957-6d5e10689cae","Type":"ContainerStarted","Data":"6c374a1f1385b9e2b38e32d1f04faa659c4af4e23397e6bc76bac1e6d675b60a"} Mar 20 17:03:10 crc kubenswrapper[4730]: I0320 17:03:10.142185 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" event={"ID":"bd7a63f3-7b9b-489f-b957-6d5e10689cae","Type":"ContainerStarted","Data":"7cfa3c4c40647b5e6e0d0655d8ae502be7742a62b0dcf48099fd5c14403c3160"} Mar 20 17:03:10 crc kubenswrapper[4730]: I0320 17:03:10.164541 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" podStartSLOduration=1.8728616009999999 podStartE2EDuration="12.164517308s" podCreationTimestamp="2026-03-20 17:02:58 +0000 UTC" firstStartedPulling="2026-03-20 17:02:59.151046066 +0000 UTC m=+5038.364417435" lastFinishedPulling="2026-03-20 17:03:09.442701773 +0000 UTC 
m=+5048.656073142" observedRunningTime="2026-03-20 17:03:10.157807467 +0000 UTC m=+5049.371178836" watchObservedRunningTime="2026-03-20 17:03:10.164517308 +0000 UTC m=+5049.377888687" Mar 20 17:03:54 crc kubenswrapper[4730]: I0320 17:03:54.590896 4730 generic.go:334] "Generic (PLEG): container finished" podID="bd7a63f3-7b9b-489f-b957-6d5e10689cae" containerID="7cfa3c4c40647b5e6e0d0655d8ae502be7742a62b0dcf48099fd5c14403c3160" exitCode=0 Mar 20 17:03:54 crc kubenswrapper[4730]: I0320 17:03:54.590973 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" event={"ID":"bd7a63f3-7b9b-489f-b957-6d5e10689cae","Type":"ContainerDied","Data":"7cfa3c4c40647b5e6e0d0655d8ae502be7742a62b0dcf48099fd5c14403c3160"} Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.758761 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.796923 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-xtxgl"] Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.809387 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-xtxgl"] Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.948566 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bj9r\" (UniqueName: \"kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r\") pod \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.948701 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host\") pod \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\" (UID: \"bd7a63f3-7b9b-489f-b957-6d5e10689cae\") " Mar 20 
17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.948758 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host" (OuterVolumeSpecName: "host") pod "bd7a63f3-7b9b-489f-b957-6d5e10689cae" (UID: "bd7a63f3-7b9b-489f-b957-6d5e10689cae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.949905 4730 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd7a63f3-7b9b-489f-b957-6d5e10689cae-host\") on node \"crc\" DevicePath \"\"" Mar 20 17:03:55 crc kubenswrapper[4730]: I0320 17:03:55.960631 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r" (OuterVolumeSpecName: "kube-api-access-8bj9r") pod "bd7a63f3-7b9b-489f-b957-6d5e10689cae" (UID: "bd7a63f3-7b9b-489f-b957-6d5e10689cae"). InnerVolumeSpecName "kube-api-access-8bj9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.053060 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bj9r\" (UniqueName: \"kubernetes.io/projected/bd7a63f3-7b9b-489f-b957-6d5e10689cae-kube-api-access-8bj9r\") on node \"crc\" DevicePath \"\"" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.636636 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c374a1f1385b9e2b38e32d1f04faa659c4af4e23397e6bc76bac1e6d675b60a" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.636792 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-xtxgl" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.990548 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zs57x/crc-debug-r2nsh"] Mar 20 17:03:56 crc kubenswrapper[4730]: E0320 17:03:56.990977 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7a63f3-7b9b-489f-b957-6d5e10689cae" containerName="container-00" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.990990 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7a63f3-7b9b-489f-b957-6d5e10689cae" containerName="container-00" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.991224 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7a63f3-7b9b-489f-b957-6d5e10689cae" containerName="container-00" Mar 20 17:03:56 crc kubenswrapper[4730]: I0320 17:03:56.991904 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.078149 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5cn\" (UniqueName: \"kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.078407 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.179367 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.179510 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j5cn\" (UniqueName: \"kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.179847 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.200082 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5cn\" (UniqueName: \"kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn\") pod \"crc-debug-r2nsh\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.310183 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.543283 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7a63f3-7b9b-489f-b957-6d5e10689cae" path="/var/lib/kubelet/pods/bd7a63f3-7b9b-489f-b957-6d5e10689cae/volumes" Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.647208 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" event={"ID":"9b604a34-be8f-425a-aef6-c3e9581035d7","Type":"ContainerStarted","Data":"f2a5b11498b3565ac584ae0844e2ddcd32e123d95c3d6fe4c6c0d6653ce556a9"} Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.647281 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" event={"ID":"9b604a34-be8f-425a-aef6-c3e9581035d7","Type":"ContainerStarted","Data":"c74b2375ab93ace2ca445822106d4ea1bc9f5071d7fa4bd714394b0009c98bde"} Mar 20 17:03:57 crc kubenswrapper[4730]: I0320 17:03:57.676529 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" podStartSLOduration=1.6765088110000002 podStartE2EDuration="1.676508811s" podCreationTimestamp="2026-03-20 17:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 17:03:57.660394692 +0000 UTC m=+5096.873766071" watchObservedRunningTime="2026-03-20 17:03:57.676508811 +0000 UTC m=+5096.889880180" Mar 20 17:03:58 crc kubenswrapper[4730]: I0320 17:03:58.658630 4730 generic.go:334] "Generic (PLEG): container finished" podID="9b604a34-be8f-425a-aef6-c3e9581035d7" containerID="f2a5b11498b3565ac584ae0844e2ddcd32e123d95c3d6fe4c6c0d6653ce556a9" exitCode=0 Mar 20 17:03:58 crc kubenswrapper[4730]: I0320 17:03:58.658678 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" 
event={"ID":"9b604a34-be8f-425a-aef6-c3e9581035d7","Type":"ContainerDied","Data":"f2a5b11498b3565ac584ae0844e2ddcd32e123d95c3d6fe4c6c0d6653ce556a9"} Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.786845 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.821835 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-r2nsh"] Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.831565 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-r2nsh"] Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.922768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host\") pod \"9b604a34-be8f-425a-aef6-c3e9581035d7\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.922867 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host" (OuterVolumeSpecName: "host") pod "9b604a34-be8f-425a-aef6-c3e9581035d7" (UID: "9b604a34-be8f-425a-aef6-c3e9581035d7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.922902 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j5cn\" (UniqueName: \"kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn\") pod \"9b604a34-be8f-425a-aef6-c3e9581035d7\" (UID: \"9b604a34-be8f-425a-aef6-c3e9581035d7\") " Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.923745 4730 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b604a34-be8f-425a-aef6-c3e9581035d7-host\") on node \"crc\" DevicePath \"\"" Mar 20 17:03:59 crc kubenswrapper[4730]: I0320 17:03:59.927967 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn" (OuterVolumeSpecName: "kube-api-access-2j5cn") pod "9b604a34-be8f-425a-aef6-c3e9581035d7" (UID: "9b604a34-be8f-425a-aef6-c3e9581035d7"). InnerVolumeSpecName "kube-api-access-2j5cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.025432 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j5cn\" (UniqueName: \"kubernetes.io/projected/9b604a34-be8f-425a-aef6-c3e9581035d7-kube-api-access-2j5cn\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.144845 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567104-xfdqf"] Mar 20 17:04:00 crc kubenswrapper[4730]: E0320 17:04:00.145329 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b604a34-be8f-425a-aef6-c3e9581035d7" containerName="container-00" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.145346 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b604a34-be8f-425a-aef6-c3e9581035d7" containerName="container-00" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.145589 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b604a34-be8f-425a-aef6-c3e9581035d7" containerName="container-00" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.146322 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.148868 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.149380 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.149660 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.154683 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567104-xfdqf"] Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.230461 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxl6s\" (UniqueName: \"kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s\") pod \"auto-csr-approver-29567104-xfdqf\" (UID: \"cb4bb26e-8780-490e-b7d4-5068d41079d5\") " pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.331731 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxl6s\" (UniqueName: \"kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s\") pod \"auto-csr-approver-29567104-xfdqf\" (UID: \"cb4bb26e-8780-490e-b7d4-5068d41079d5\") " pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.351125 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxl6s\" (UniqueName: \"kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s\") pod \"auto-csr-approver-29567104-xfdqf\" (UID: \"cb4bb26e-8780-490e-b7d4-5068d41079d5\") " 
pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.499972 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.703530 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74b2375ab93ace2ca445822106d4ea1bc9f5071d7fa4bd714394b0009c98bde" Mar 20 17:04:00 crc kubenswrapper[4730]: I0320 17:04:00.703633 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-r2nsh" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.021045 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zs57x/crc-debug-v8sjq"] Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.024787 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.062562 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567104-xfdqf"] Mar 20 17:04:01 crc kubenswrapper[4730]: W0320 17:04:01.064219 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb4bb26e_8780_490e_b7d4_5068d41079d5.slice/crio-02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519 WatchSource:0}: Error finding container 02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519: Status 404 returned error can't find the container with id 02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519 Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.066651 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.149127 4730 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.149211 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2vg\" (UniqueName: \"kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.250709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz2vg\" (UniqueName: \"kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.250890 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.251009 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.277815 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz2vg\" (UniqueName: 
\"kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg\") pod \"crc-debug-v8sjq\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.338639 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:01 crc kubenswrapper[4730]: W0320 17:04:01.380088 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd071ff87_5bd9_451c_a9f5_e23dc9dda0f8.slice/crio-77307fa29bb7e2a5f9baface29502c291c6437a1eda4d88b6c1a52cb6e7e8cde WatchSource:0}: Error finding container 77307fa29bb7e2a5f9baface29502c291c6437a1eda4d88b6c1a52cb6e7e8cde: Status 404 returned error can't find the container with id 77307fa29bb7e2a5f9baface29502c291c6437a1eda4d88b6c1a52cb6e7e8cde Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.555225 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b604a34-be8f-425a-aef6-c3e9581035d7" path="/var/lib/kubelet/pods/9b604a34-be8f-425a-aef6-c3e9581035d7/volumes" Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.713471 4730 generic.go:334] "Generic (PLEG): container finished" podID="d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" containerID="1ed06aadad6b1efaa33beb6ea1265b5a0894dda501160b5c7cfc4bd8932d7469" exitCode=0 Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.713565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" event={"ID":"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8","Type":"ContainerDied","Data":"1ed06aadad6b1efaa33beb6ea1265b5a0894dda501160b5c7cfc4bd8932d7469"} Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.713878 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" 
event={"ID":"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8","Type":"ContainerStarted","Data":"77307fa29bb7e2a5f9baface29502c291c6437a1eda4d88b6c1a52cb6e7e8cde"} Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.714981 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" event={"ID":"cb4bb26e-8780-490e-b7d4-5068d41079d5","Type":"ContainerStarted","Data":"02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519"} Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.758296 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-v8sjq"] Mar 20 17:04:01 crc kubenswrapper[4730]: I0320 17:04:01.768297 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zs57x/crc-debug-v8sjq"] Mar 20 17:04:02 crc kubenswrapper[4730]: I0320 17:04:02.726698 4730 generic.go:334] "Generic (PLEG): container finished" podID="cb4bb26e-8780-490e-b7d4-5068d41079d5" containerID="570fedc27114561def00c9fd45b45e729130f1ccdb2c50a188548ae1020d9f83" exitCode=0 Mar 20 17:04:02 crc kubenswrapper[4730]: I0320 17:04:02.728605 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" event={"ID":"cb4bb26e-8780-490e-b7d4-5068d41079d5","Type":"ContainerDied","Data":"570fedc27114561def00c9fd45b45e729130f1ccdb2c50a188548ae1020d9f83"} Mar 20 17:04:02 crc kubenswrapper[4730]: I0320 17:04:02.874931 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.001627 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host\") pod \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.001821 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz2vg\" (UniqueName: \"kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg\") pod \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\" (UID: \"d071ff87-5bd9-451c-a9f5-e23dc9dda0f8\") " Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.002151 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host" (OuterVolumeSpecName: "host") pod "d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" (UID: "d071ff87-5bd9-451c-a9f5-e23dc9dda0f8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.002519 4730 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-host\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.018200 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg" (OuterVolumeSpecName: "kube-api-access-fz2vg") pod "d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" (UID: "d071ff87-5bd9-451c-a9f5-e23dc9dda0f8"). InnerVolumeSpecName "kube-api-access-fz2vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.105254 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz2vg\" (UniqueName: \"kubernetes.io/projected/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8-kube-api-access-fz2vg\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.553868 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" path="/var/lib/kubelet/pods/d071ff87-5bd9-451c-a9f5-e23dc9dda0f8/volumes" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.740462 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/crc-debug-v8sjq" Mar 20 17:04:03 crc kubenswrapper[4730]: I0320 17:04:03.741296 4730 scope.go:117] "RemoveContainer" containerID="1ed06aadad6b1efaa33beb6ea1265b5a0894dda501160b5c7cfc4bd8932d7469" Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.147953 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.336503 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxl6s\" (UniqueName: \"kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s\") pod \"cb4bb26e-8780-490e-b7d4-5068d41079d5\" (UID: \"cb4bb26e-8780-490e-b7d4-5068d41079d5\") " Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.342865 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s" (OuterVolumeSpecName: "kube-api-access-fxl6s") pod "cb4bb26e-8780-490e-b7d4-5068d41079d5" (UID: "cb4bb26e-8780-490e-b7d4-5068d41079d5"). InnerVolumeSpecName "kube-api-access-fxl6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.439750 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxl6s\" (UniqueName: \"kubernetes.io/projected/cb4bb26e-8780-490e-b7d4-5068d41079d5-kube-api-access-fxl6s\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.764653 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" event={"ID":"cb4bb26e-8780-490e-b7d4-5068d41079d5","Type":"ContainerDied","Data":"02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519"} Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.765047 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02919563296b440dbb8f5883abe6fd55e90bbc6eeb56409984dea21fb85e0519" Mar 20 17:04:04 crc kubenswrapper[4730]: I0320 17:04:04.765134 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567104-xfdqf" Mar 20 17:04:05 crc kubenswrapper[4730]: I0320 17:04:05.221006 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567098-4wf7k"] Mar 20 17:04:05 crc kubenswrapper[4730]: I0320 17:04:05.232080 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567098-4wf7k"] Mar 20 17:04:05 crc kubenswrapper[4730]: I0320 17:04:05.542891 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d7ea06-69cc-41b0-afc3-fb5f3e55049b" path="/var/lib/kubelet/pods/a8d7ea06-69cc-41b0-afc3-fb5f3e55049b/volumes" Mar 20 17:04:12 crc kubenswrapper[4730]: I0320 17:04:12.879836 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 17:04:12 crc kubenswrapper[4730]: I0320 17:04:12.880437 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.184191 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:29 crc kubenswrapper[4730]: E0320 17:04:29.185005 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4bb26e-8780-490e-b7d4-5068d41079d5" containerName="oc" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.185016 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4bb26e-8780-490e-b7d4-5068d41079d5" containerName="oc" Mar 20 17:04:29 crc kubenswrapper[4730]: E0320 17:04:29.185048 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" containerName="container-00" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.185053 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" containerName="container-00" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.185326 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="d071ff87-5bd9-451c-a9f5-e23dc9dda0f8" containerName="container-00" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.185346 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4bb26e-8780-490e-b7d4-5068d41079d5" containerName="oc" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.186750 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.213755 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.386488 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.386547 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8xq\" (UniqueName: \"kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.386645 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.488430 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.488543 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.488583 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8xq\" (UniqueName: \"kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.489084 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.489099 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.515675 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8xq\" (UniqueName: \"kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq\") pod \"community-operators-fcd2p\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:29 crc kubenswrapper[4730]: I0320 17:04:29.804342 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:30 crc kubenswrapper[4730]: I0320 17:04:30.267555 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:31 crc kubenswrapper[4730]: I0320 17:04:31.051094 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerStarted","Data":"8cabdbdc57eeb19551cd55bb8a1050f15d88aad3e277347b9124a8a7e866177e"} Mar 20 17:04:32 crc kubenswrapper[4730]: I0320 17:04:32.060708 4730 generic.go:334] "Generic (PLEG): container finished" podID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerID="59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f" exitCode=0 Mar 20 17:04:32 crc kubenswrapper[4730]: I0320 17:04:32.060818 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerDied","Data":"59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f"} Mar 20 17:04:34 crc kubenswrapper[4730]: I0320 17:04:34.082157 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerStarted","Data":"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b"} Mar 20 17:04:35 crc kubenswrapper[4730]: I0320 17:04:35.093376 4730 generic.go:334] "Generic (PLEG): container finished" podID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerID="96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b" exitCode=0 Mar 20 17:04:35 crc kubenswrapper[4730]: I0320 17:04:35.093430 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" 
event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerDied","Data":"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b"} Mar 20 17:04:36 crc kubenswrapper[4730]: I0320 17:04:36.105565 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerStarted","Data":"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f"} Mar 20 17:04:36 crc kubenswrapper[4730]: I0320 17:04:36.129238 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fcd2p" podStartSLOduration=3.497064389 podStartE2EDuration="7.129215173s" podCreationTimestamp="2026-03-20 17:04:29 +0000 UTC" firstStartedPulling="2026-03-20 17:04:32.062453298 +0000 UTC m=+5131.275824667" lastFinishedPulling="2026-03-20 17:04:35.694604042 +0000 UTC m=+5134.907975451" observedRunningTime="2026-03-20 17:04:36.121997597 +0000 UTC m=+5135.335369006" watchObservedRunningTime="2026-03-20 17:04:36.129215173 +0000 UTC m=+5135.342586572" Mar 20 17:04:39 crc kubenswrapper[4730]: I0320 17:04:39.806079 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:39 crc kubenswrapper[4730]: I0320 17:04:39.807223 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:39 crc kubenswrapper[4730]: I0320 17:04:39.894184 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:40 crc kubenswrapper[4730]: I0320 17:04:40.248858 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:40 crc kubenswrapper[4730]: I0320 17:04:40.331474 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.179473 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fcd2p" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="registry-server" containerID="cri-o://537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f" gracePeriod=2 Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.651074 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.785999 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content\") pod \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.786170 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8xq\" (UniqueName: \"kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq\") pod \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.786237 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities\") pod \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\" (UID: \"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d\") " Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.788301 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities" (OuterVolumeSpecName: "utilities") pod "bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" (UID: 
"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.795529 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq" (OuterVolumeSpecName: "kube-api-access-sw8xq") pod "bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" (UID: "bfe373a9-f4c2-49e5-a90e-9e50cae59b9d"). InnerVolumeSpecName "kube-api-access-sw8xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.866415 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" (UID: "bfe373a9-f4c2-49e5-a90e-9e50cae59b9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.879961 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.880025 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.888824 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.888879 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8xq\" (UniqueName: \"kubernetes.io/projected/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-kube-api-access-sw8xq\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:42 crc kubenswrapper[4730]: I0320 17:04:42.888896 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.192895 4730 generic.go:334] "Generic (PLEG): container finished" podID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerID="537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f" exitCode=0 Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.192940 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerDied","Data":"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f"} Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.192972 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fcd2p" event={"ID":"bfe373a9-f4c2-49e5-a90e-9e50cae59b9d","Type":"ContainerDied","Data":"8cabdbdc57eeb19551cd55bb8a1050f15d88aad3e277347b9124a8a7e866177e"} Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.192989 4730 scope.go:117] "RemoveContainer" containerID="537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.193031 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fcd2p" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.229332 4730 scope.go:117] "RemoveContainer" containerID="96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.251918 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.260816 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fcd2p"] Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.420954 4730 scope.go:117] "RemoveContainer" containerID="59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.467058 4730 scope.go:117] "RemoveContainer" containerID="537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f" Mar 20 17:04:43 crc kubenswrapper[4730]: E0320 17:04:43.467733 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f\": container with ID starting with 537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f not found: ID does not exist" containerID="537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.467795 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f"} err="failed to get container status \"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f\": rpc error: code = NotFound desc = could not find container \"537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f\": container with ID starting with 537fe8bc77ad4b9ecb53781873569e461fd7df07fd39d0f5686d297ff949b58f not 
found: ID does not exist" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.467834 4730 scope.go:117] "RemoveContainer" containerID="96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b" Mar 20 17:04:43 crc kubenswrapper[4730]: E0320 17:04:43.468355 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b\": container with ID starting with 96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b not found: ID does not exist" containerID="96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.468429 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b"} err="failed to get container status \"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b\": rpc error: code = NotFound desc = could not find container \"96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b\": container with ID starting with 96aab3d66b9ef5da28cba835dbe221b8f36436bf1099b8ede616d1131a49af7b not found: ID does not exist" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.468491 4730 scope.go:117] "RemoveContainer" containerID="59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f" Mar 20 17:04:43 crc kubenswrapper[4730]: E0320 17:04:43.470595 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f\": container with ID starting with 59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f not found: ID does not exist" containerID="59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.470641 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f"} err="failed to get container status \"59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f\": rpc error: code = NotFound desc = could not find container \"59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f\": container with ID starting with 59aa36d9c92972c8823e6afd5423c8f1fd3bae685e916c413b590cf24dcc5d7f not found: ID does not exist" Mar 20 17:04:43 crc kubenswrapper[4730]: I0320 17:04:43.550159 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" path="/var/lib/kubelet/pods/bfe373a9-f4c2-49e5-a90e-9e50cae59b9d/volumes" Mar 20 17:04:47 crc kubenswrapper[4730]: I0320 17:04:47.990731 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86947bcbc8-94hl8_59460a49-c9fe-46c9-b898-d08234ca7cd3/barbican-api/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.115360 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86947bcbc8-94hl8_59460a49-c9fe-46c9-b898-d08234ca7cd3/barbican-api-log/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.222592 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bc9f54b4-6szxq_e4dfee88-47ff-4e8b-9f46-60cc17fb0080/barbican-keystone-listener/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.282038 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86bc9f54b4-6szxq_e4dfee88-47ff-4e8b-9f46-60cc17fb0080/barbican-keystone-listener-log/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.399360 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-54b9958865-vn9kj_8f8c40f6-c8d3-4c8c-97eb-643d32774174/barbican-worker/0.log" Mar 20 17:04:48 crc 
kubenswrapper[4730]: I0320 17:04:48.429312 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-54b9958865-vn9kj_8f8c40f6-c8d3-4c8c-97eb-643d32774174/barbican-worker-log/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.753297 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2333c0f3-d6ce-405f-b8c8-755be42ba74b/ceilometer-central-agent/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.815382 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2333c0f3-d6ce-405f-b8c8-755be42ba74b/ceilometer-notification-agent/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.924444 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2333c0f3-d6ce-405f-b8c8-755be42ba74b/proxy-httpd/0.log" Mar 20 17:04:48 crc kubenswrapper[4730]: I0320 17:04:48.929290 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ppqdh_73c1c649-4459-497e-ba5b-245a4eb5ad04/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:49 crc kubenswrapper[4730]: I0320 17:04:49.039836 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2333c0f3-d6ce-405f-b8c8-755be42ba74b/sg-core/0.log" Mar 20 17:04:49 crc kubenswrapper[4730]: I0320 17:04:49.175920 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa/cinder-api-log/0.log" Mar 20 17:04:49 crc kubenswrapper[4730]: I0320 17:04:49.691940 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0bde1710-3861-42cb-8647-292785ee4392/probe/0.log" Mar 20 17:04:49 crc kubenswrapper[4730]: I0320 17:04:49.834518 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4ae2fc19-770e-471e-a3d4-f0f8c08f4eaa/cinder-api/0.log" Mar 20 17:04:49 crc 
kubenswrapper[4730]: I0320 17:04:49.952049 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ff07e31-53ad-49da-941d-607115f965e0/cinder-scheduler/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.027873 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0bde1710-3861-42cb-8647-292785ee4392/cinder-backup/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.079118 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ff07e31-53ad-49da-941d-607115f965e0/probe/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.277117 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_436a7a40-7823-4670-a107-ff5ca02da822/probe/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.552064 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_20675030-52b7-4f1d-b087-d7703a59f5e1/probe/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.603538 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_436a7a40-7823-4670-a107-ff5ca02da822/cinder-volume/0.log" Mar 20 17:04:50 crc kubenswrapper[4730]: I0320 17:04:50.727299 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_20675030-52b7-4f1d-b087-d7703a59f5e1/cinder-volume/0.log" Mar 20 17:04:51 crc kubenswrapper[4730]: I0320 17:04:51.012112 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dff2j_ca62ee94-4983-4acc-856a-3faf59cae3e1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:51 crc kubenswrapper[4730]: I0320 17:04:51.085371 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m89bj_a8c27e63-ebf9-45ff-87b2-4782b20e19e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:51 crc kubenswrapper[4730]: I0320 17:04:51.523597 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9449c877-vxfrw_4139a04b-4804-475f-9da3-6c40dad56690/init/0.log" Mar 20 17:04:51 crc kubenswrapper[4730]: I0320 17:04:51.724580 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9449c877-vxfrw_4139a04b-4804-475f-9da3-6c40dad56690/init/0.log" Mar 20 17:04:51 crc kubenswrapper[4730]: I0320 17:04:51.824660 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-9449c877-vxfrw_4139a04b-4804-475f-9da3-6c40dad56690/dnsmasq-dns/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.048476 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9p7mz_962231f7-41b6-4754-b63c-523277f7cf50/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.063155 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_47ed5bd7-7aa8-4f16-98de-f09e21218ae6/glance-httpd/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.074054 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_47ed5bd7-7aa8-4f16-98de-f09e21218ae6/glance-log/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.222101 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_84366eea-e5f9-43da-ac65-8e79cb659c0a/glance-httpd/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.243915 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_84366eea-e5f9-43da-ac65-8e79cb659c0a/glance-log/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.388466 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4d78n_423144fa-9b01-4466-993c-6ab7075e1ad5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:52 crc kubenswrapper[4730]: I0320 17:04:52.984954 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29567041-zmx9n_d3747d18-1b1e-4c43-ac1a-efeeb453b1ae/keystone-cron/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.050214 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dkksx_133f1969-bed7-44cd-9dac-b9dfaa376515/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.168840 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29567101-lzlmb_8a113133-c537-41c7-a14e-614fb8bcd24f/keystone-cron/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.202518 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6fb7949f77-2l9t7_e2b9f0c5-80cc-4a4c-bbd8-c70cda9d5d3d/keystone-api/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.285102 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2455b53b-7716-45b9-ac24-cd0bd892fbb9/kube-state-metrics/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.896209 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5dc7dd859f-wtxnj_62339bcb-2edc-4881-a15e-a9387442db89/neutron-api/0.log" Mar 20 17:04:53 crc kubenswrapper[4730]: I0320 17:04:53.931607 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5dc7dd859f-wtxnj_62339bcb-2edc-4881-a15e-a9387442db89/neutron-httpd/0.log" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.073213 4730 scope.go:117] "RemoveContainer" containerID="7caef3ce3ae643ea42e6304dc53d81d43ac8e7cc2d51fc8c56b8771cdad2f656" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.183966 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_df9ca02d-e20f-4f55-ba14-92b91812afb6/setup-container/0.log" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.286355 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-n8bcw_74d70014-6de1-4d90-b04a-8f8376d3a9e0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.375493 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_df9ca02d-e20f-4f55-ba14-92b91812afb6/setup-container/0.log" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.487931 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_df9ca02d-e20f-4f55-ba14-92b91812afb6/rabbitmq/0.log" Mar 20 17:04:54 crc kubenswrapper[4730]: I0320 17:04:54.664804 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-z4vhj_43d453db-c8fb-438d-927e-6eaee8383df1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:55 crc kubenswrapper[4730]: I0320 17:04:55.108983 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_0940bcf4-b3ca-4f1d-92df-5fa9f477c800/nova-cell0-conductor-conductor/0.log" Mar 20 17:04:55 crc kubenswrapper[4730]: I0320 17:04:55.381679 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_e788dfbe-bc18-46f9-b2bf-674940e1c392/nova-cell1-conductor-conductor/0.log" Mar 20 17:04:55 crc kubenswrapper[4730]: I0320 17:04:55.686558 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6586493e-e5d0-4504-b516-ebaac5defd79/nova-cell1-novncproxy-novncproxy/0.log" Mar 20 17:04:55 crc kubenswrapper[4730]: I0320 17:04:55.815212 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2380321d-63e0-40a2-8ca4-5780cba46259/nova-api-log/0.log" Mar 20 17:04:56 crc kubenswrapper[4730]: I0320 17:04:56.244393 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a58f453e-84d8-47b1-8740-406f92c4ca79/nova-metadata-log/0.log" Mar 20 17:04:56 crc kubenswrapper[4730]: I0320 17:04:56.433150 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2380321d-63e0-40a2-8ca4-5780cba46259/nova-api-api/0.log" Mar 20 17:04:56 crc kubenswrapper[4730]: I0320 17:04:56.880495 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_899bd9ae-9354-4e70-ad37-b438a5a33a24/mysql-bootstrap/0.log" Mar 20 17:04:56 crc kubenswrapper[4730]: I0320 17:04:56.943295 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4deff063-ecb8-4cf2-8e94-45ab62a613bc/nova-scheduler-scheduler/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.068194 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_899bd9ae-9354-4e70-ad37-b438a5a33a24/mysql-bootstrap/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.072132 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a58f453e-84d8-47b1-8740-406f92c4ca79/nova-metadata-metadata/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.127146 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_899bd9ae-9354-4e70-ad37-b438a5a33a24/galera/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.214147 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x4t58_6ffb462f-06f9-49df-bfe7-d41c274d4b05/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.242949 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abf778f-200f-4d48-97b6-08a638b4efa2/mysql-bootstrap/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.498210 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abf778f-200f-4d48-97b6-08a638b4efa2/mysql-bootstrap/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.505903 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6abf778f-200f-4d48-97b6-08a638b4efa2/galera/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.540938 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a893eba7-9715-4599-93c2-0365a45134e9/openstackclient/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.703897 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gtrnp_31651551-edb9-4793-a752-39fa60a85ee3/ovn-controller/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.751061 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ktnvd_c615e3c6-d705-46e8-a1e7-c1c86df055f5/openstack-network-exporter/0.log" Mar 20 17:04:57 crc kubenswrapper[4730]: I0320 17:04:57.948615 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cdd7f_35efb2c2-6521-4f6f-a350-a4dc537ecaf8/ovsdb-server-init/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.157347 4730 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cdd7f_35efb2c2-6521-4f6f-a350-a4dc537ecaf8/ovsdb-server-init/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.179037 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cdd7f_35efb2c2-6521-4f6f-a350-a4dc537ecaf8/ovsdb-server/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.422464 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cb9ef9a-6d98-43c1-8e74-7f24ba39357d/openstack-network-exporter/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.525840 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cdd7f_35efb2c2-6521-4f6f-a350-a4dc537ecaf8/ovs-vswitchd/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.551522 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4cb9ef9a-6d98-43c1-8e74-7f24ba39357d/ovn-northd/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.566198 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fxwgt_efd41cb9-678e-43d9-8643-b5aa95f1ec3e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.763479 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caa1db28-afc0-4abc-aa80-84cccb3d8412/ovsdbserver-nb/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.769832 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caa1db28-afc0-4abc-aa80-84cccb3d8412/openstack-network-exporter/0.log" Mar 20 17:04:58 crc kubenswrapper[4730]: I0320 17:04:58.925872 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ba8c36f-1882-4bb3-bcb5-b3518ce35553/openstack-network-exporter/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.011972 4730 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ba8c36f-1882-4bb3-bcb5-b3518ce35553/ovsdbserver-sb/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.290653 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_457a736d-6c3f-486d-b8d1-fef19df33e26/init-config-reloader/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.369895 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78b446cdb6-zs6nw_d2885c5d-681f-4e22-bdeb-b716957d83e1/placement-api/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.373511 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78b446cdb6-zs6nw_d2885c5d-681f-4e22-bdeb-b716957d83e1/placement-log/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.550956 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_457a736d-6c3f-486d-b8d1-fef19df33e26/init-config-reloader/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.607156 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_457a736d-6c3f-486d-b8d1-fef19df33e26/config-reloader/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.650723 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_457a736d-6c3f-486d-b8d1-fef19df33e26/prometheus/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.694537 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_457a736d-6c3f-486d-b8d1-fef19df33e26/thanos-sidecar/0.log" Mar 20 17:04:59 crc kubenswrapper[4730]: I0320 17:04:59.816524 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b92f799a-be4e-45a1-9e2e-c93c4992c9ce/setup-container/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 
17:05:00.066886 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b92f799a-be4e-45a1-9e2e-c93c4992c9ce/rabbitmq/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 17:05:00.079373 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b92f799a-be4e-45a1-9e2e-c93c4992c9ce/setup-container/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 17:05:00.110742 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_707f8f93-76f2-4472-a015-5dccae194c5e/setup-container/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 17:05:00.369499 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_707f8f93-76f2-4472-a015-5dccae194c5e/setup-container/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 17:05:00.457969 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_707f8f93-76f2-4472-a015-5dccae194c5e/rabbitmq/0.log" Mar 20 17:05:00 crc kubenswrapper[4730]: I0320 17:05:00.483241 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fcg5s_b49a7544-a685-49c3-81fa-e1bbec4453ba/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.173725 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-q8dm9_129ce6b6-b215-4ca0-9583-78aae3c2371c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.236065 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-l27lg_16667e9d-1075-4c26-8002-61c737a8f76a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.447280 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ckzj9_0cdca1cb-76d0-4486-8ae9-d67a1ed9b79c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.494021 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-4mmgp_eebb2eb5-4553-41b0-85e6-81e470576d50/ssh-known-hosts-edpm-deployment/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.728980 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c5c8ffdd9-xpfhf_b9780622-27f3-4339-8107-321feed5e25b/proxy-server/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.839848 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c5c8ffdd9-xpfhf_b9780622-27f3-4339-8107-321feed5e25b/proxy-httpd/0.log" Mar 20 17:05:01 crc kubenswrapper[4730]: I0320 17:05:01.901359 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7d8lv_167282ce-29fc-44db-9b0b-baf2c956f433/swift-ring-rebalance/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.611935 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/account-auditor/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.738421 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/account-reaper/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.745828 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/account-replicator/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.754068 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/account-server/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 
17:05:02.820809 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/container-auditor/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.951987 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/container-replicator/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.959768 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/container-updater/0.log" Mar 20 17:05:02 crc kubenswrapper[4730]: I0320 17:05:02.986969 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/container-server/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.046072 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/object-auditor/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.159041 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/object-expirer/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.163940 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/object-server/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.243502 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/object-replicator/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.278235 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/object-updater/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.359467 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/rsync/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.429695 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c9def6e-27a0-4543-8d3c-07b3e4005b33/swift-recon-cron/0.log" Mar 20 17:05:03 crc kubenswrapper[4730]: I0320 17:05:03.799302 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d79eb29a-c814-4aa0-a268-2069d58b08d2/test-operator-logs-container/0.log" Mar 20 17:05:04 crc kubenswrapper[4730]: I0320 17:05:04.075693 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7pnjt_cdec76f1-b3a6-4c3d-a9d6-3553fd6b1122/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:04 crc kubenswrapper[4730]: I0320 17:05:04.141301 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c69a80b5-69a7-48c5-8ad4-5063b6cb4676/tempest-tests-tempest-tests-runner/0.log" Mar 20 17:05:04 crc kubenswrapper[4730]: I0320 17:05:04.307892 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mb8wq_884c2fa6-babb-44b8-b8e2-3e4fbce27153/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 20 17:05:04 crc kubenswrapper[4730]: I0320 17:05:04.910098 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_0d05b7e1-a651-404e-89e9-8276427610fc/watcher-applier/0.log" Mar 20 17:05:05 crc kubenswrapper[4730]: I0320 17:05:05.430166 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ba310e23-3097-4114-8628-4e7ada94eac6/watcher-api-log/0.log" Mar 20 17:05:07 crc kubenswrapper[4730]: I0320 17:05:07.850381 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_4adb002b-165b-4e7c-9e26-0a98f30dd467/watcher-decision-engine/0.log" Mar 20 17:05:08 crc kubenswrapper[4730]: I0320 17:05:08.748317 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ba310e23-3097-4114-8628-4e7ada94eac6/watcher-api/0.log" Mar 20 17:05:12 crc kubenswrapper[4730]: I0320 17:05:12.883099 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 17:05:12 crc kubenswrapper[4730]: I0320 17:05:12.883659 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:05:12 crc kubenswrapper[4730]: I0320 17:05:12.883711 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 17:05:12 crc kubenswrapper[4730]: I0320 17:05:12.884569 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 17:05:12 crc kubenswrapper[4730]: I0320 17:05:12.884620 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" 
containerName="machine-config-daemon" containerID="cri-o://c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6" gracePeriod=600 Mar 20 17:05:13 crc kubenswrapper[4730]: I0320 17:05:13.483909 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6" exitCode=0 Mar 20 17:05:13 crc kubenswrapper[4730]: I0320 17:05:13.483992 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6"} Mar 20 17:05:13 crc kubenswrapper[4730]: I0320 17:05:13.484593 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca"} Mar 20 17:05:13 crc kubenswrapper[4730]: I0320 17:05:13.484654 4730 scope.go:117] "RemoveContainer" containerID="96fa1a39363ce580a7d7ba62fa4f551be70b7f98a5cbd49713d8269774ccd8b9" Mar 20 17:05:20 crc kubenswrapper[4730]: I0320 17:05:20.960975 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_84bbdebb-43de-41d6-82d4-71b0948c25f8/memcached/0.log" Mar 20 17:05:39 crc kubenswrapper[4730]: I0320 17:05:39.620352 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/util/0.log" Mar 20 17:05:39 crc kubenswrapper[4730]: I0320 17:05:39.812948 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/pull/0.log" Mar 20 17:05:39 crc 
kubenswrapper[4730]: I0320 17:05:39.822165 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/util/0.log" Mar 20 17:05:39 crc kubenswrapper[4730]: I0320 17:05:39.839669 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/pull/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.063284 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/util/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.064266 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/extract/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.074019 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6e5835fdb519dbfa6963d219a5ec71b0d73708b5df93d36b8aba1618bammpbt_8b2334c1-9644-4fd3-9ea3-984ebcd8dc5a/pull/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.283204 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-dmd8z_4fb51ed6-04e3-40db-ab21-eb0fe66442fe/manager/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.480131 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-nwwzc_e8ad6f56-863f-473b-a4d4-d4f70d9489a4/manager/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.616815 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-llp6b_d658514c-f369-4ce2-ad50-d055fd208694/manager/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.766022 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-v96m5_acffaecc-dd6c-4819-91cf-99c5d0154143/manager/0.log" Mar 20 17:05:40 crc kubenswrapper[4730]: I0320 17:05:40.951499 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-pf8sw_f733406e-5258-4cfe-870d-4fb86152363e/manager/0.log" Mar 20 17:05:41 crc kubenswrapper[4730]: I0320 17:05:41.209301 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-9k6lh_24280954-941c-445f-aa52-e360ce544046/manager/0.log" Mar 20 17:05:41 crc kubenswrapper[4730]: I0320 17:05:41.521649 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-g4kgd_cf3ded14-d81b-4384-93e4-e51cde6a31ec/manager/0.log" Mar 20 17:05:41 crc kubenswrapper[4730]: I0320 17:05:41.565181 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-4pkr9_d8b68e41-b53d-4fb3-8a86-0c604cda0e46/manager/0.log" Mar 20 17:05:41 crc kubenswrapper[4730]: I0320 17:05:41.758340 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-bqjxs_87b37583-ab1d-4f9e-98e9-8cb9bdcc5165/manager/0.log" Mar 20 17:05:41 crc kubenswrapper[4730]: I0320 17:05:41.899488 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-rnx2d_19a5ba3c-9f89-43f6-bd55-6998df2e3533/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.093985 4730 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-wqqnd_c5aaa9e9-aebc-4daa-b7ab-c6064b5a78ef/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.100140 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-xw6kk_61755ffd-de91-4a38-a174-fe1a4c57dfd0/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.271650 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-l7v9q_36dd23cb-43b2-4c25-9e24-3e2f69f93eff/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.299407 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-w8x5z_d7ad408f-56db-4b5b-bea9-ba821eae2b80/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.455875 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-f8l2x_8f74be61-d309-417c-90a3-2962b57071c4/manager/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.598179 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-646f48576b-5p6h9_d85bb2c7-8dba-4091-a6cf-12cf58bf64a9/operator/0.log" Mar 20 17:05:42 crc kubenswrapper[4730]: I0320 17:05:42.845455 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-f9xcd_d125a115-3173-4a52-8794-2832951fa428/registry-server/0.log" Mar 20 17:05:43 crc kubenswrapper[4730]: I0320 17:05:43.071779 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-t7kkm_6944c865-92a4-441c-907b-27424898cb99/manager/0.log" Mar 20 17:05:43 crc kubenswrapper[4730]: I0320 17:05:43.114003 4730 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-lt49w_9944d85d-4f1c-4312-ac57-49ee75a8fd16/manager/0.log" Mar 20 17:05:43 crc kubenswrapper[4730]: I0320 17:05:43.411853 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mdzv5_82cae974-2029-42c3-81bf-e9bee167e991/operator/0.log" Mar 20 17:05:43 crc kubenswrapper[4730]: I0320 17:05:43.899691 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f58c59cbb-76ssq_c66e7fcc-f4ab-4d70-ad2b-b9186a4a2008/manager/0.log" Mar 20 17:05:43 crc kubenswrapper[4730]: I0320 17:05:43.984782 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-6f2w8_db4a9305-eefd-4804-ac7a-4d811bd928f5/manager/0.log" Mar 20 17:05:44 crc kubenswrapper[4730]: I0320 17:05:44.142876 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-bm7hr_92c29eff-b9ab-4420-86c6-6b388cfc87af/manager/0.log" Mar 20 17:05:44 crc kubenswrapper[4730]: I0320 17:05:44.162966 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-lrpjm_cdbd62c8-9960-4257-87d9-d4923c7ef8dd/manager/0.log" Mar 20 17:05:44 crc kubenswrapper[4730]: I0320 17:05:44.333910 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c5858c67b-cfmtk_f00b4813-358d-49c4-bf9d-486e35f5a94f/manager/0.log" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.154361 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567106-pg9p8"] Mar 20 17:06:00 crc kubenswrapper[4730]: E0320 17:06:00.155776 4730 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="registry-server" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.155806 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="registry-server" Mar 20 17:06:00 crc kubenswrapper[4730]: E0320 17:06:00.155827 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="extract-utilities" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.155840 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="extract-utilities" Mar 20 17:06:00 crc kubenswrapper[4730]: E0320 17:06:00.155893 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="extract-content" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.155907 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="extract-content" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.156282 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe373a9-f4c2-49e5-a90e-9e50cae59b9d" containerName="registry-server" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.157496 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.159452 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.159654 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.159948 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.163266 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567106-pg9p8"] Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.220420 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdhf\" (UniqueName: \"kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf\") pod \"auto-csr-approver-29567106-pg9p8\" (UID: \"9695c6e8-6be3-4465-95a1-887c6a568fb7\") " pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.323311 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdhf\" (UniqueName: \"kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf\") pod \"auto-csr-approver-29567106-pg9p8\" (UID: \"9695c6e8-6be3-4465-95a1-887c6a568fb7\") " pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.345963 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdhf\" (UniqueName: \"kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf\") pod \"auto-csr-approver-29567106-pg9p8\" (UID: \"9695c6e8-6be3-4465-95a1-887c6a568fb7\") " 
pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.488923 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:00 crc kubenswrapper[4730]: I0320 17:06:00.991931 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567106-pg9p8"] Mar 20 17:06:01 crc kubenswrapper[4730]: I0320 17:06:01.035064 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" event={"ID":"9695c6e8-6be3-4465-95a1-887c6a568fb7","Type":"ContainerStarted","Data":"d0e8ff4e8f7264bc67ea642d0b97ca002626697a4ef1bbaa4564b31ddd2b7e19"} Mar 20 17:06:03 crc kubenswrapper[4730]: I0320 17:06:03.059215 4730 generic.go:334] "Generic (PLEG): container finished" podID="9695c6e8-6be3-4465-95a1-887c6a568fb7" containerID="e24f74530672b4126d2aef0eaec17c584c50b3452f9280f1c1dd7481992b500e" exitCode=0 Mar 20 17:06:03 crc kubenswrapper[4730]: I0320 17:06:03.059283 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" event={"ID":"9695c6e8-6be3-4465-95a1-887c6a568fb7","Type":"ContainerDied","Data":"e24f74530672b4126d2aef0eaec17c584c50b3452f9280f1c1dd7481992b500e"} Mar 20 17:06:04 crc kubenswrapper[4730]: I0320 17:06:04.481795 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:04 crc kubenswrapper[4730]: I0320 17:06:04.523503 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jdhf\" (UniqueName: \"kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf\") pod \"9695c6e8-6be3-4465-95a1-887c6a568fb7\" (UID: \"9695c6e8-6be3-4465-95a1-887c6a568fb7\") " Mar 20 17:06:04 crc kubenswrapper[4730]: I0320 17:06:04.533477 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf" (OuterVolumeSpecName: "kube-api-access-4jdhf") pod "9695c6e8-6be3-4465-95a1-887c6a568fb7" (UID: "9695c6e8-6be3-4465-95a1-887c6a568fb7"). InnerVolumeSpecName "kube-api-access-4jdhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:06:04 crc kubenswrapper[4730]: I0320 17:06:04.627592 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jdhf\" (UniqueName: \"kubernetes.io/projected/9695c6e8-6be3-4465-95a1-887c6a568fb7-kube-api-access-4jdhf\") on node \"crc\" DevicePath \"\"" Mar 20 17:06:05 crc kubenswrapper[4730]: I0320 17:06:05.080269 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" event={"ID":"9695c6e8-6be3-4465-95a1-887c6a568fb7","Type":"ContainerDied","Data":"d0e8ff4e8f7264bc67ea642d0b97ca002626697a4ef1bbaa4564b31ddd2b7e19"} Mar 20 17:06:05 crc kubenswrapper[4730]: I0320 17:06:05.080558 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e8ff4e8f7264bc67ea642d0b97ca002626697a4ef1bbaa4564b31ddd2b7e19" Mar 20 17:06:05 crc kubenswrapper[4730]: I0320 17:06:05.080391 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567106-pg9p8" Mar 20 17:06:05 crc kubenswrapper[4730]: I0320 17:06:05.555777 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567100-vgj2v"] Mar 20 17:06:05 crc kubenswrapper[4730]: I0320 17:06:05.565450 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567100-vgj2v"] Mar 20 17:06:07 crc kubenswrapper[4730]: I0320 17:06:07.546231 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1293fe12-0f59-44fb-b726-9d72c790dabd" path="/var/lib/kubelet/pods/1293fe12-0f59-44fb-b726-9d72c790dabd/volumes" Mar 20 17:06:08 crc kubenswrapper[4730]: I0320 17:06:08.591009 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jkk9s_c9f80b42-cff3-48a7-9e09-02ff65e9d9f8/control-plane-machine-set-operator/0.log" Mar 20 17:06:08 crc kubenswrapper[4730]: I0320 17:06:08.806474 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k6z2l_5c0e41b3-aa2d-4083-acb2-f0f68a29fcce/kube-rbac-proxy/0.log" Mar 20 17:06:08 crc kubenswrapper[4730]: I0320 17:06:08.884128 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k6z2l_5c0e41b3-aa2d-4083-acb2-f0f68a29fcce/machine-api-operator/0.log" Mar 20 17:06:23 crc kubenswrapper[4730]: I0320 17:06:23.518410 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dwg9x_096957e4-5a35-42f7-adf0-cac7672589a4/cert-manager-controller/0.log" Mar 20 17:06:23 crc kubenswrapper[4730]: I0320 17:06:23.698593 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-89r9d_b59581d5-071c-4764-9ef6-50ea4724e0a6/cert-manager-cainjector/0.log" Mar 20 17:06:23 crc kubenswrapper[4730]: I0320 
17:06:23.725815 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qcz52_e7c6b209-7bad-4eb0-b8d0-61a602be9b89/cert-manager-webhook/0.log" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.344497 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:29 crc kubenswrapper[4730]: E0320 17:06:29.346021 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9695c6e8-6be3-4465-95a1-887c6a568fb7" containerName="oc" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.346046 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="9695c6e8-6be3-4465-95a1-887c6a568fb7" containerName="oc" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.346563 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="9695c6e8-6be3-4465-95a1-887c6a568fb7" containerName="oc" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.364892 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.365060 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.488222 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.488415 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tx5\" (UniqueName: \"kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.488485 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.591491 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tx5\" (UniqueName: \"kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.591917 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities\") pod 
\"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.592060 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.592764 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.593095 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.615151 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tx5\" (UniqueName: \"kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5\") pod \"redhat-marketplace-7drps\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:29 crc kubenswrapper[4730]: I0320 17:06:29.692507 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:30 crc kubenswrapper[4730]: I0320 17:06:30.187626 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:30 crc kubenswrapper[4730]: I0320 17:06:30.329163 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7drps" event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerStarted","Data":"36264ade0ad9d6fee845713f37ec9f2078ade3ea2ad5877b3188271083aef68d"} Mar 20 17:06:31 crc kubenswrapper[4730]: I0320 17:06:31.345006 4730 generic.go:334] "Generic (PLEG): container finished" podID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerID="287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719" exitCode=0 Mar 20 17:06:31 crc kubenswrapper[4730]: I0320 17:06:31.345119 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7drps" event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerDied","Data":"287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719"} Mar 20 17:06:33 crc kubenswrapper[4730]: I0320 17:06:33.375692 4730 generic.go:334] "Generic (PLEG): container finished" podID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerID="9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0" exitCode=0 Mar 20 17:06:33 crc kubenswrapper[4730]: I0320 17:06:33.375867 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7drps" event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerDied","Data":"9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0"} Mar 20 17:06:34 crc kubenswrapper[4730]: I0320 17:06:34.388822 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7drps" 
event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerStarted","Data":"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f"} Mar 20 17:06:34 crc kubenswrapper[4730]: I0320 17:06:34.409937 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7drps" podStartSLOduration=2.873022937 podStartE2EDuration="5.409916342s" podCreationTimestamp="2026-03-20 17:06:29 +0000 UTC" firstStartedPulling="2026-03-20 17:06:31.347855093 +0000 UTC m=+5250.561226462" lastFinishedPulling="2026-03-20 17:06:33.884748488 +0000 UTC m=+5253.098119867" observedRunningTime="2026-03-20 17:06:34.404948731 +0000 UTC m=+5253.618320110" watchObservedRunningTime="2026-03-20 17:06:34.409916342 +0000 UTC m=+5253.623287721" Mar 20 17:06:39 crc kubenswrapper[4730]: I0320 17:06:39.693908 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:39 crc kubenswrapper[4730]: I0320 17:06:39.695224 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:39 crc kubenswrapper[4730]: I0320 17:06:39.742898 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.308593 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-nnrp6_663e9228-322c-4d6a-8988-0033d5dd587a/nmstate-console-plugin/0.log" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.498144 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.552425 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:40 crc 
kubenswrapper[4730]: I0320 17:06:40.596789 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nfr9k_3f50a695-6f8b-42e6-aa4f-3dfd888b6afa/kube-rbac-proxy/0.log" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.611904 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6tdt2_0a4f6fcf-7c76-49cf-8f3c-d83879a650f1/nmstate-handler/0.log" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.704530 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nfr9k_3f50a695-6f8b-42e6-aa4f-3dfd888b6afa/nmstate-metrics/0.log" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.821816 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-2qcpg_e3bdfb07-3f68-4262-8116-44b5ea591644/nmstate-operator/0.log" Mar 20 17:06:40 crc kubenswrapper[4730]: I0320 17:06:40.938440 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-nq6dd_0f827638-33ac-4f99-920b-6e9b72db7955/nmstate-webhook/0.log" Mar 20 17:06:42 crc kubenswrapper[4730]: I0320 17:06:42.457219 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7drps" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="registry-server" containerID="cri-o://c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f" gracePeriod=2 Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.007779 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.166068 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5tx5\" (UniqueName: \"kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5\") pod \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.166206 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content\") pod \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.166240 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities\") pod \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\" (UID: \"81f396fd-c6d6-4332-8cad-cb7aec1d11cf\") " Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.167479 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities" (OuterVolumeSpecName: "utilities") pod "81f396fd-c6d6-4332-8cad-cb7aec1d11cf" (UID: "81f396fd-c6d6-4332-8cad-cb7aec1d11cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.186518 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5" (OuterVolumeSpecName: "kube-api-access-s5tx5") pod "81f396fd-c6d6-4332-8cad-cb7aec1d11cf" (UID: "81f396fd-c6d6-4332-8cad-cb7aec1d11cf"). InnerVolumeSpecName "kube-api-access-s5tx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.201878 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81f396fd-c6d6-4332-8cad-cb7aec1d11cf" (UID: "81f396fd-c6d6-4332-8cad-cb7aec1d11cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.268439 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.268465 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5tx5\" (UniqueName: \"kubernetes.io/projected/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-kube-api-access-s5tx5\") on node \"crc\" DevicePath \"\"" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.268476 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f396fd-c6d6-4332-8cad-cb7aec1d11cf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.468176 4730 generic.go:334] "Generic (PLEG): container finished" podID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerID="c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f" exitCode=0 Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.468216 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7drps" event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerDied","Data":"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f"} Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.468240 4730 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7drps" event={"ID":"81f396fd-c6d6-4332-8cad-cb7aec1d11cf","Type":"ContainerDied","Data":"36264ade0ad9d6fee845713f37ec9f2078ade3ea2ad5877b3188271083aef68d"} Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.468271 4730 scope.go:117] "RemoveContainer" containerID="c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.468365 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7drps" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.488666 4730 scope.go:117] "RemoveContainer" containerID="9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.543477 4730 scope.go:117] "RemoveContainer" containerID="287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.570024 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.570065 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7drps"] Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.587698 4730 scope.go:117] "RemoveContainer" containerID="c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f" Mar 20 17:06:43 crc kubenswrapper[4730]: E0320 17:06:43.588110 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f\": container with ID starting with c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f not found: ID does not exist" containerID="c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.588146 4730 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f"} err="failed to get container status \"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f\": rpc error: code = NotFound desc = could not find container \"c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f\": container with ID starting with c209088d7f8758ddccc4f443d60e6fcb50cc66f336529613105424b90c81a25f not found: ID does not exist" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.588171 4730 scope.go:117] "RemoveContainer" containerID="9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0" Mar 20 17:06:43 crc kubenswrapper[4730]: E0320 17:06:43.588434 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0\": container with ID starting with 9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0 not found: ID does not exist" containerID="9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.588456 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0"} err="failed to get container status \"9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0\": rpc error: code = NotFound desc = could not find container \"9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0\": container with ID starting with 9b086c5376f1f623201bf1fea4679d805d3951607b8b5d78337530e561fc7ef0 not found: ID does not exist" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.588473 4730 scope.go:117] "RemoveContainer" containerID="287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719" Mar 20 17:06:43 crc kubenswrapper[4730]: E0320 
17:06:43.589160 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719\": container with ID starting with 287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719 not found: ID does not exist" containerID="287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719" Mar 20 17:06:43 crc kubenswrapper[4730]: I0320 17:06:43.589188 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719"} err="failed to get container status \"287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719\": rpc error: code = NotFound desc = could not find container \"287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719\": container with ID starting with 287ca69ae95c400004ca862c69e6d11e4b4732f5968d9b7179d8e6e6e6ad1719 not found: ID does not exist" Mar 20 17:06:45 crc kubenswrapper[4730]: I0320 17:06:45.543694 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" path="/var/lib/kubelet/pods/81f396fd-c6d6-4332-8cad-cb7aec1d11cf/volumes" Mar 20 17:06:54 crc kubenswrapper[4730]: I0320 17:06:54.252596 4730 scope.go:117] "RemoveContainer" containerID="f0d02d12d3b8583d27ef06f8d4e4230e6d9bdedae9fb10c5b6dcf9c218e3e2d5" Mar 20 17:06:58 crc kubenswrapper[4730]: I0320 17:06:58.516316 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-w67vt_5db89423-34f0-46c3-9dcf-2179c6c6f42a/prometheus-operator/0.log" Mar 20 17:06:58 crc kubenswrapper[4730]: I0320 17:06:58.605569 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl_7520ba92-5020-48d1-8d1c-fa20f0f407be/prometheus-operator-admission-webhook/0.log" Mar 20 17:06:58 crc 
kubenswrapper[4730]: I0320 17:06:58.659597 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp_c12d2a2b-f7db-41be-89e1-97869c8119c2/prometheus-operator-admission-webhook/0.log" Mar 20 17:06:58 crc kubenswrapper[4730]: I0320 17:06:58.820957 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-nh5dg_28a4594d-a811-4533-8d77-40267a80c581/operator/0.log" Mar 20 17:06:58 crc kubenswrapper[4730]: I0320 17:06:58.861601 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6b8b4f7dbd-pmzhq_50aad4a2-a828-49d9-9bb3-115336081293/perses-operator/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.129622 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-jdxzq_a42a5cd0-d730-4d48-8082-2491494e90ff/kube-rbac-proxy/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.147071 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-jdxzq_a42a5cd0-d730-4d48-8082-2491494e90ff/controller/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.218318 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-frr-files/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.445832 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-frr-files/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.447187 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-reloader/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.453582 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-metrics/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.453919 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-reloader/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.747144 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-frr-files/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.791807 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-metrics/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.792036 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-reloader/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.792140 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-metrics/0.log" Mar 20 17:07:17 crc kubenswrapper[4730]: I0320 17:07:17.994601 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-reloader/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.031738 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-frr-files/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.050176 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/controller/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.060411 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/cp-metrics/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.786530 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/frr-metrics/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.807551 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/kube-rbac-proxy-frr/0.log" Mar 20 17:07:18 crc kubenswrapper[4730]: I0320 17:07:18.850412 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/kube-rbac-proxy/0.log" Mar 20 17:07:19 crc kubenswrapper[4730]: I0320 17:07:19.049184 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/reloader/0.log" Mar 20 17:07:19 crc kubenswrapper[4730]: I0320 17:07:19.137107 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-vmgrx_70093cb9-bc43-427d-a8e4-5750058e2580/frr-k8s-webhook-server/0.log" Mar 20 17:07:19 crc kubenswrapper[4730]: I0320 17:07:19.393798 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85db46595-g556k_b41d974a-1e37-48ae-afdc-48c682c73637/manager/0.log" Mar 20 17:07:19 crc kubenswrapper[4730]: I0320 17:07:19.507105 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f9794bdc6-ccfwn_9dfba7eb-850f-4e34-a875-8ef219c8c783/webhook-server/0.log" Mar 20 17:07:19 crc kubenswrapper[4730]: I0320 17:07:19.612818 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbvnw_02f5e1af-23a0-43ef-89ad-9c5af9e98cfd/kube-rbac-proxy/0.log" Mar 20 17:07:20 crc kubenswrapper[4730]: I0320 17:07:20.307958 4730 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tbvnw_02f5e1af-23a0-43ef-89ad-9c5af9e98cfd/speaker/0.log" Mar 20 17:07:20 crc kubenswrapper[4730]: I0320 17:07:20.916208 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pbr4w_5a3acee0-3c35-4a9f-8b2d-0e5c64ca0d1f/frr/0.log" Mar 20 17:07:35 crc kubenswrapper[4730]: I0320 17:07:35.747798 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/util/0.log" Mar 20 17:07:35 crc kubenswrapper[4730]: I0320 17:07:35.920749 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/util/0.log" Mar 20 17:07:35 crc kubenswrapper[4730]: I0320 17:07:35.930889 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/pull/0.log" Mar 20 17:07:35 crc kubenswrapper[4730]: I0320 17:07:35.962896 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/pull/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.105828 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/util/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.106549 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/pull/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.118567 4730 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874ft8z9_6caa320c-cdca-4f52-aac0-b5c3325396db/extract/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.302449 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/util/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.455822 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/util/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.474874 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/pull/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.475052 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/pull/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.642126 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/util/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.656500 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/pull/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.693715 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1k77ck_aac6cea5-e666-44b1-9507-f57de2361c40/extract/0.log" Mar 20 17:07:36 crc kubenswrapper[4730]: I0320 17:07:36.803501 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/util/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.001416 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/util/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.006389 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/pull/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.031935 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/pull/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.162381 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/util/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.184719 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/pull/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.201672 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf923972669rzv_25d50abe-8eeb-4761-83b7-d9e7fbb78a76/extract/0.log" Mar 
20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.341742 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-utilities/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.508670 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-content/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.510374 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-content/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.547940 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-utilities/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.759922 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-content/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.763085 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/extract-utilities/0.log" Mar 20 17:07:37 crc kubenswrapper[4730]: I0320 17:07:37.962538 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-utilities/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.142753 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-utilities/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.173616 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rkhd6_d6e8fab3-7ebb-4b3f-af2c-fcc299e01381/registry-server/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.359154 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-content/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.363898 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-content/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.511582 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-utilities/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.528535 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/extract-content/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.745621 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b842f_b3eaa81f-92a9-49fa-aca0-1e8e35920f20/marketplace-operator/0.log" Mar 20 17:07:38 crc kubenswrapper[4730]: I0320 17:07:38.824230 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-utilities/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.071015 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-utilities/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.094125 4730 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-content/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.137777 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-content/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.360699 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-utilities/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.364998 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bbtzz_d8a22a9f-2975-485c-99f7-05e6b934e0a1/registry-server/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.387077 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/extract-content/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.545021 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7vhhm_cae6da2c-50d0-460f-b29c-5b3e3df439c5/registry-server/0.log" Mar 20 17:07:39 crc kubenswrapper[4730]: I0320 17:07:39.963444 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-utilities/0.log" Mar 20 17:07:40 crc kubenswrapper[4730]: I0320 17:07:40.147019 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-utilities/0.log" Mar 20 17:07:40 crc kubenswrapper[4730]: I0320 17:07:40.166332 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-content/0.log" 
Mar 20 17:07:40 crc kubenswrapper[4730]: I0320 17:07:40.215074 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-content/0.log" Mar 20 17:07:40 crc kubenswrapper[4730]: I0320 17:07:40.375197 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-utilities/0.log" Mar 20 17:07:40 crc kubenswrapper[4730]: I0320 17:07:40.385298 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/extract-content/0.log" Mar 20 17:07:41 crc kubenswrapper[4730]: I0320 17:07:41.175986 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vk6rc_70d03566-9776-4dcc-84b5-17281f8ae66e/registry-server/0.log" Mar 20 17:07:42 crc kubenswrapper[4730]: I0320 17:07:42.880698 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 17:07:42 crc kubenswrapper[4730]: I0320 17:07:42.881190 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:07:56 crc kubenswrapper[4730]: I0320 17:07:56.735633 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c477fd8f-d52bp_c12d2a2b-f7db-41be-89e1-97869c8119c2/prometheus-operator-admission-webhook/0.log" Mar 20 17:07:56 crc 
kubenswrapper[4730]: I0320 17:07:56.740579 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-w67vt_5db89423-34f0-46c3-9dcf-2179c6c6f42a/prometheus-operator/0.log" Mar 20 17:07:56 crc kubenswrapper[4730]: I0320 17:07:56.812774 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c477fd8f-7ttsl_7520ba92-5020-48d1-8d1c-fa20f0f407be/prometheus-operator-admission-webhook/0.log" Mar 20 17:07:56 crc kubenswrapper[4730]: I0320 17:07:56.916727 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6b8b4f7dbd-pmzhq_50aad4a2-a828-49d9-9bb3-115336081293/perses-operator/0.log" Mar 20 17:07:56 crc kubenswrapper[4730]: I0320 17:07:56.917664 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-nh5dg_28a4594d-a811-4533-8d77-40267a80c581/operator/0.log" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.170015 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567108-zftgj"] Mar 20 17:08:00 crc kubenswrapper[4730]: E0320 17:08:00.170949 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="extract-utilities" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.170973 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="extract-utilities" Mar 20 17:08:00 crc kubenswrapper[4730]: E0320 17:08:00.171024 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="extract-content" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.171036 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="extract-content" Mar 20 17:08:00 crc 
kubenswrapper[4730]: E0320 17:08:00.171081 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="registry-server" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.171091 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="registry-server" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.171436 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f396fd-c6d6-4332-8cad-cb7aec1d11cf" containerName="registry-server" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.172565 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.177192 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.180735 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.180793 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.191018 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567108-zftgj"] Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.290395 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfjb\" (UniqueName: \"kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb\") pod \"auto-csr-approver-29567108-zftgj\" (UID: \"ee25ae29-2b59-43fa-bee7-ff759f2b962d\") " pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.392655 4730 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnfjb\" (UniqueName: \"kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb\") pod \"auto-csr-approver-29567108-zftgj\" (UID: \"ee25ae29-2b59-43fa-bee7-ff759f2b962d\") " pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.416159 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnfjb\" (UniqueName: \"kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb\") pod \"auto-csr-approver-29567108-zftgj\" (UID: \"ee25ae29-2b59-43fa-bee7-ff759f2b962d\") " pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:00 crc kubenswrapper[4730]: I0320 17:08:00.503205 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:01 crc kubenswrapper[4730]: I0320 17:08:01.003881 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567108-zftgj"] Mar 20 17:08:01 crc kubenswrapper[4730]: I0320 17:08:01.316573 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567108-zftgj" event={"ID":"ee25ae29-2b59-43fa-bee7-ff759f2b962d","Type":"ContainerStarted","Data":"30cc381bc8fa85c62d2867894ba73e17d2109616119bf0ef41530f071962c2e3"} Mar 20 17:08:03 crc kubenswrapper[4730]: I0320 17:08:03.335707 4730 generic.go:334] "Generic (PLEG): container finished" podID="ee25ae29-2b59-43fa-bee7-ff759f2b962d" containerID="b89905d33f72768b38a357a1f6d9426d8d5d00caccec7854337ced1d8a3cac16" exitCode=0 Mar 20 17:08:03 crc kubenswrapper[4730]: I0320 17:08:03.335875 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567108-zftgj" 
event={"ID":"ee25ae29-2b59-43fa-bee7-ff759f2b962d","Type":"ContainerDied","Data":"b89905d33f72768b38a357a1f6d9426d8d5d00caccec7854337ced1d8a3cac16"} Mar 20 17:08:04 crc kubenswrapper[4730]: I0320 17:08:04.754046 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:04 crc kubenswrapper[4730]: I0320 17:08:04.942584 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnfjb\" (UniqueName: \"kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb\") pod \"ee25ae29-2b59-43fa-bee7-ff759f2b962d\" (UID: \"ee25ae29-2b59-43fa-bee7-ff759f2b962d\") " Mar 20 17:08:04 crc kubenswrapper[4730]: I0320 17:08:04.954664 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb" (OuterVolumeSpecName: "kube-api-access-dnfjb") pod "ee25ae29-2b59-43fa-bee7-ff759f2b962d" (UID: "ee25ae29-2b59-43fa-bee7-ff759f2b962d"). InnerVolumeSpecName "kube-api-access-dnfjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.045006 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnfjb\" (UniqueName: \"kubernetes.io/projected/ee25ae29-2b59-43fa-bee7-ff759f2b962d-kube-api-access-dnfjb\") on node \"crc\" DevicePath \"\"" Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.355315 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567108-zftgj" event={"ID":"ee25ae29-2b59-43fa-bee7-ff759f2b962d","Type":"ContainerDied","Data":"30cc381bc8fa85c62d2867894ba73e17d2109616119bf0ef41530f071962c2e3"} Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.355529 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30cc381bc8fa85c62d2867894ba73e17d2109616119bf0ef41530f071962c2e3" Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.355637 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567108-zftgj" Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.842996 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567102-k48jl"] Mar 20 17:08:05 crc kubenswrapper[4730]: I0320 17:08:05.854504 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567102-k48jl"] Mar 20 17:08:07 crc kubenswrapper[4730]: I0320 17:08:07.543916 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d317b26b-912f-4276-a234-084782092ff3" path="/var/lib/kubelet/pods/d317b26b-912f-4276-a234-084782092ff3/volumes" Mar 20 17:08:12 crc kubenswrapper[4730]: I0320 17:08:12.882749 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 17:08:12 crc kubenswrapper[4730]: I0320 17:08:12.883348 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:08:42 crc kubenswrapper[4730]: I0320 17:08:42.879963 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 17:08:42 crc kubenswrapper[4730]: I0320 17:08:42.880642 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 17:08:42 crc kubenswrapper[4730]: I0320 17:08:42.880699 4730 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" Mar 20 17:08:42 crc kubenswrapper[4730]: I0320 17:08:42.881496 4730 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca"} pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 17:08:42 crc kubenswrapper[4730]: I0320 17:08:42.881567 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" containerID="cri-o://581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" gracePeriod=600 Mar 20 17:08:43 crc kubenswrapper[4730]: E0320 17:08:43.019981 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:08:43 crc kubenswrapper[4730]: I0320 17:08:43.805746 4730 generic.go:334] "Generic (PLEG): container finished" podID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" exitCode=0 Mar 20 17:08:43 crc kubenswrapper[4730]: I0320 17:08:43.805808 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerDied","Data":"581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca"} Mar 20 17:08:43 crc kubenswrapper[4730]: I0320 17:08:43.805893 4730 scope.go:117] "RemoveContainer" containerID="c3ac4abf290606f9ec67064e3bf0182c8ecb8c9be4ecf85a1bb60feb02fd27e6" Mar 20 17:08:43 crc kubenswrapper[4730]: I0320 17:08:43.806847 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:08:43 crc kubenswrapper[4730]: E0320 17:08:43.807781 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.650265 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:08:51 crc kubenswrapper[4730]: E0320 17:08:51.651202 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee25ae29-2b59-43fa-bee7-ff759f2b962d" containerName="oc" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.651216 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee25ae29-2b59-43fa-bee7-ff759f2b962d" containerName="oc" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.651554 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee25ae29-2b59-43fa-bee7-ff759f2b962d" containerName="oc" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.653428 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.659134 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.740234 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.740479 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjph\" (UniqueName: \"kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.740650 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.842841 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.843028 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.843086 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjph\" (UniqueName: \"kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.843949 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.843965 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.863707 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjph\" (UniqueName: \"kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph\") pod \"redhat-operators-tzjmg\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:51 crc kubenswrapper[4730]: I0320 17:08:51.989725 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:08:52 crc kubenswrapper[4730]: I0320 17:08:52.475207 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:08:52 crc kubenswrapper[4730]: I0320 17:08:52.924700 4730 generic.go:334] "Generic (PLEG): container finished" podID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerID="85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9" exitCode=0 Mar 20 17:08:52 crc kubenswrapper[4730]: I0320 17:08:52.924771 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerDied","Data":"85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9"} Mar 20 17:08:52 crc kubenswrapper[4730]: I0320 17:08:52.925134 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerStarted","Data":"546e0511e70de04145fcedfd822bf13d8f1a11303e7f2ed0ec7c3de19cbb37aa"} Mar 20 17:08:53 crc kubenswrapper[4730]: I0320 17:08:53.939631 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerStarted","Data":"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1"} Mar 20 17:08:54 crc kubenswrapper[4730]: I0320 17:08:54.879856 4730 scope.go:117] "RemoveContainer" containerID="bf551572383a97d7725c248e69428cf0db8c3b25722e05b3bf7b441d82bc6b56" Mar 20 17:08:58 crc kubenswrapper[4730]: I0320 17:08:58.536985 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:08:58 crc kubenswrapper[4730]: E0320 17:08:58.538032 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:09:00 crc kubenswrapper[4730]: I0320 17:09:00.030904 4730 generic.go:334] "Generic (PLEG): container finished" podID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerID="a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1" exitCode=0 Mar 20 17:09:00 crc kubenswrapper[4730]: I0320 17:09:00.030958 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerDied","Data":"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1"} Mar 20 17:09:01 crc kubenswrapper[4730]: I0320 17:09:01.044487 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerStarted","Data":"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c"} Mar 20 17:09:01 crc kubenswrapper[4730]: I0320 17:09:01.085901 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tzjmg" podStartSLOduration=2.569202216 podStartE2EDuration="10.085867959s" podCreationTimestamp="2026-03-20 17:08:51 +0000 UTC" firstStartedPulling="2026-03-20 17:08:52.926894722 +0000 UTC m=+5392.140266091" lastFinishedPulling="2026-03-20 17:09:00.443560435 +0000 UTC m=+5399.656931834" observedRunningTime="2026-03-20 17:09:01.081236867 +0000 UTC m=+5400.294608246" watchObservedRunningTime="2026-03-20 17:09:01.085867959 +0000 UTC m=+5400.299239368" Mar 20 17:09:01 crc kubenswrapper[4730]: I0320 17:09:01.990616 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:01 crc kubenswrapper[4730]: I0320 17:09:01.991937 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:03 crc kubenswrapper[4730]: I0320 17:09:03.031801 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzjmg" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" probeResult="failure" output=< Mar 20 17:09:03 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 17:09:03 crc kubenswrapper[4730]: > Mar 20 17:09:13 crc kubenswrapper[4730]: I0320 17:09:13.081579 4730 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzjmg" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" probeResult="failure" output=< Mar 20 17:09:13 crc kubenswrapper[4730]: timeout: failed to connect service ":50051" within 1s Mar 20 17:09:13 crc kubenswrapper[4730]: > Mar 20 17:09:13 crc kubenswrapper[4730]: I0320 17:09:13.533571 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:09:13 crc kubenswrapper[4730]: E0320 17:09:13.533891 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:09:22 crc kubenswrapper[4730]: I0320 17:09:22.085239 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:22 crc kubenswrapper[4730]: I0320 17:09:22.175841 
4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:22 crc kubenswrapper[4730]: I0320 17:09:22.798592 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:09:23 crc kubenswrapper[4730]: I0320 17:09:23.325675 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tzjmg" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" containerID="cri-o://6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c" gracePeriod=2 Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.289108 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.337547 4730 generic.go:334] "Generic (PLEG): container finished" podID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerID="6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c" exitCode=0 Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.337583 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerDied","Data":"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c"} Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.337607 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzjmg" event={"ID":"4a7390a1-d2ba-45fd-86f5-173f814c93a9","Type":"ContainerDied","Data":"546e0511e70de04145fcedfd822bf13d8f1a11303e7f2ed0ec7c3de19cbb37aa"} Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.337623 4730 scope.go:117] "RemoveContainer" containerID="6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.337739 4730 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzjmg" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.373588 4730 scope.go:117] "RemoveContainer" containerID="a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.411534 4730 scope.go:117] "RemoveContainer" containerID="85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.445203 4730 scope.go:117] "RemoveContainer" containerID="6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c" Mar 20 17:09:24 crc kubenswrapper[4730]: E0320 17:09:24.445648 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c\": container with ID starting with 6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c not found: ID does not exist" containerID="6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.445681 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c"} err="failed to get container status \"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c\": rpc error: code = NotFound desc = could not find container \"6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c\": container with ID starting with 6629ef5d1c4e362c0e58ac52b41f12978595079c6c024c9726e99d103ab25d4c not found: ID does not exist" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.445703 4730 scope.go:117] "RemoveContainer" containerID="a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1" Mar 20 17:09:24 crc kubenswrapper[4730]: E0320 17:09:24.446123 4730 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1\": container with ID starting with a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1 not found: ID does not exist" containerID="a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.446160 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1"} err="failed to get container status \"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1\": rpc error: code = NotFound desc = could not find container \"a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1\": container with ID starting with a5a3ccc441c93068e49eab5a94102c5fc521cd85b318db8bfbb39924f2f902d1 not found: ID does not exist" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.446186 4730 scope.go:117] "RemoveContainer" containerID="85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9" Mar 20 17:09:24 crc kubenswrapper[4730]: E0320 17:09:24.447315 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9\": container with ID starting with 85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9 not found: ID does not exist" containerID="85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.447373 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9"} err="failed to get container status \"85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9\": rpc error: code = NotFound desc = could not find 
container \"85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9\": container with ID starting with 85b46985717f5402312da3ba16d7344bfe930fc2131a84e29d94e9ed5b0900e9 not found: ID does not exist" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.462577 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities\") pod \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.462722 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content\") pod \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.462768 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjph\" (UniqueName: \"kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph\") pod \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\" (UID: \"4a7390a1-d2ba-45fd-86f5-173f814c93a9\") " Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.463694 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities" (OuterVolumeSpecName: "utilities") pod "4a7390a1-d2ba-45fd-86f5-173f814c93a9" (UID: "4a7390a1-d2ba-45fd-86f5-173f814c93a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.469442 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph" (OuterVolumeSpecName: "kube-api-access-chjph") pod "4a7390a1-d2ba-45fd-86f5-173f814c93a9" (UID: "4a7390a1-d2ba-45fd-86f5-173f814c93a9"). InnerVolumeSpecName "kube-api-access-chjph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.565089 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjph\" (UniqueName: \"kubernetes.io/projected/4a7390a1-d2ba-45fd-86f5-173f814c93a9-kube-api-access-chjph\") on node \"crc\" DevicePath \"\"" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.565443 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.583993 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a7390a1-d2ba-45fd-86f5-173f814c93a9" (UID: "4a7390a1-d2ba-45fd-86f5-173f814c93a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.673475 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a7390a1-d2ba-45fd-86f5-173f814c93a9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.682494 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:09:24 crc kubenswrapper[4730]: I0320 17:09:24.691820 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tzjmg"] Mar 20 17:09:25 crc kubenswrapper[4730]: I0320 17:09:25.578368 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" path="/var/lib/kubelet/pods/4a7390a1-d2ba-45fd-86f5-173f814c93a9/volumes" Mar 20 17:09:26 crc kubenswrapper[4730]: I0320 17:09:26.534104 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:09:26 crc kubenswrapper[4730]: E0320 17:09:26.535190 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:09:40 crc kubenswrapper[4730]: I0320 17:09:40.532960 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:09:40 crc kubenswrapper[4730]: E0320 17:09:40.534020 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:09:53 crc kubenswrapper[4730]: I0320 17:09:53.533675 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:09:53 crc kubenswrapper[4730]: E0320 17:09:53.535077 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.004324 4730 scope.go:117] "RemoveContainer" containerID="7cfa3c4c40647b5e6e0d0655d8ae502be7742a62b0dcf48099fd5c14403c3160" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.733557 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:09:55 crc kubenswrapper[4730]: E0320 17:09:55.734522 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="extract-utilities" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.734546 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="extract-utilities" Mar 20 17:09:55 crc kubenswrapper[4730]: E0320 17:09:55.734581 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="extract-content" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.734594 4730 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="extract-content" Mar 20 17:09:55 crc kubenswrapper[4730]: E0320 17:09:55.734618 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.734630 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.734895 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7390a1-d2ba-45fd-86f5-173f814c93a9" containerName="registry-server" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.736934 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.747861 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.786682 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerID="5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e" exitCode=0 Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.786722 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zs57x/must-gather-nxsd6" event={"ID":"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d","Type":"ContainerDied","Data":"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e"} Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.787617 4730 scope.go:117] "RemoveContainer" containerID="5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.888453 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.888713 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58b8\" (UniqueName: \"kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.888889 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.991193 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58b8\" (UniqueName: \"kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.991357 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.991448 4730 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.991820 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:55 crc kubenswrapper[4730]: I0320 17:09:55.991864 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:56 crc kubenswrapper[4730]: I0320 17:09:56.016012 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58b8\" (UniqueName: \"kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8\") pod \"certified-operators-dg9kw\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:56 crc kubenswrapper[4730]: I0320 17:09:56.063494 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:09:56 crc kubenswrapper[4730]: I0320 17:09:56.714002 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:09:56 crc kubenswrapper[4730]: I0320 17:09:56.798865 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerStarted","Data":"f937314b5ce57ec36a9a4339d1db04b7a002d434849520857f0973e48a5279bd"} Mar 20 17:09:56 crc kubenswrapper[4730]: I0320 17:09:56.809239 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zs57x_must-gather-nxsd6_4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d/gather/0.log" Mar 20 17:09:57 crc kubenswrapper[4730]: I0320 17:09:57.813558 4730 generic.go:334] "Generic (PLEG): container finished" podID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerID="0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604" exitCode=0 Mar 20 17:09:57 crc kubenswrapper[4730]: I0320 17:09:57.813993 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerDied","Data":"0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604"} Mar 20 17:09:57 crc kubenswrapper[4730]: I0320 17:09:57.816776 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 17:09:58 crc kubenswrapper[4730]: I0320 17:09:58.842595 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerStarted","Data":"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9"} Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.159827 4730 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29567110-n8lgt"] Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.162953 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.168612 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567110-n8lgt"] Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.226683 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.226716 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.229461 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.331605 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpckb\" (UniqueName: \"kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb\") pod \"auto-csr-approver-29567110-n8lgt\" (UID: \"fe10df03-e254-4075-9487-78370bdbdf87\") " pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.434048 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpckb\" (UniqueName: \"kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb\") pod \"auto-csr-approver-29567110-n8lgt\" (UID: \"fe10df03-e254-4075-9487-78370bdbdf87\") " pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.462156 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpckb\" (UniqueName: 
\"kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb\") pod \"auto-csr-approver-29567110-n8lgt\" (UID: \"fe10df03-e254-4075-9487-78370bdbdf87\") " pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.560070 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.864723 4730 generic.go:334] "Generic (PLEG): container finished" podID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerID="ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9" exitCode=0 Mar 20 17:10:00 crc kubenswrapper[4730]: I0320 17:10:00.864776 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerDied","Data":"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9"} Mar 20 17:10:01 crc kubenswrapper[4730]: I0320 17:10:01.093799 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567110-n8lgt"] Mar 20 17:10:01 crc kubenswrapper[4730]: W0320 17:10:01.104281 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe10df03_e254_4075_9487_78370bdbdf87.slice/crio-72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad WatchSource:0}: Error finding container 72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad: Status 404 returned error can't find the container with id 72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad Mar 20 17:10:01 crc kubenswrapper[4730]: I0320 17:10:01.882560 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" 
event={"ID":"fe10df03-e254-4075-9487-78370bdbdf87","Type":"ContainerStarted","Data":"72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad"} Mar 20 17:10:02 crc kubenswrapper[4730]: I0320 17:10:02.892237 4730 generic.go:334] "Generic (PLEG): container finished" podID="fe10df03-e254-4075-9487-78370bdbdf87" containerID="2423acdd77361bd88556503f198911c6f5c67615d5fb3a870a16f2bcad32e4e8" exitCode=0 Mar 20 17:10:02 crc kubenswrapper[4730]: I0320 17:10:02.892349 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" event={"ID":"fe10df03-e254-4075-9487-78370bdbdf87","Type":"ContainerDied","Data":"2423acdd77361bd88556503f198911c6f5c67615d5fb3a870a16f2bcad32e4e8"} Mar 20 17:10:02 crc kubenswrapper[4730]: I0320 17:10:02.895085 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerStarted","Data":"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9"} Mar 20 17:10:02 crc kubenswrapper[4730]: I0320 17:10:02.936937 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dg9kw" podStartSLOduration=4.44623531 podStartE2EDuration="7.936920314s" podCreationTimestamp="2026-03-20 17:09:55 +0000 UTC" firstStartedPulling="2026-03-20 17:09:57.816448456 +0000 UTC m=+5457.029819825" lastFinishedPulling="2026-03-20 17:10:01.30713346 +0000 UTC m=+5460.520504829" observedRunningTime="2026-03-20 17:10:02.928155814 +0000 UTC m=+5462.141527183" watchObservedRunningTime="2026-03-20 17:10:02.936920314 +0000 UTC m=+5462.150291683" Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.532488 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.730823 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpckb\" (UniqueName: \"kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb\") pod \"fe10df03-e254-4075-9487-78370bdbdf87\" (UID: \"fe10df03-e254-4075-9487-78370bdbdf87\") " Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.736615 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb" (OuterVolumeSpecName: "kube-api-access-tpckb") pod "fe10df03-e254-4075-9487-78370bdbdf87" (UID: "fe10df03-e254-4075-9487-78370bdbdf87"). InnerVolumeSpecName "kube-api-access-tpckb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.834478 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpckb\" (UniqueName: \"kubernetes.io/projected/fe10df03-e254-4075-9487-78370bdbdf87-kube-api-access-tpckb\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.925777 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" event={"ID":"fe10df03-e254-4075-9487-78370bdbdf87","Type":"ContainerDied","Data":"72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad"} Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.925865 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72ad172115d00ed0027c14d609644a41e503a9fc6c051d5d3569a5ad175ed0ad" Mar 20 17:10:04 crc kubenswrapper[4730]: I0320 17:10:04.925814 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567110-n8lgt" Mar 20 17:10:05 crc kubenswrapper[4730]: I0320 17:10:05.609880 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567104-xfdqf"] Mar 20 17:10:05 crc kubenswrapper[4730]: I0320 17:10:05.626647 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567104-xfdqf"] Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.033106 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zs57x/must-gather-nxsd6"] Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.033380 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zs57x/must-gather-nxsd6" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="copy" containerID="cri-o://5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d" gracePeriod=2 Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.045581 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zs57x/must-gather-nxsd6"] Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.063959 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.064016 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.110733 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.533351 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:10:06 crc kubenswrapper[4730]: E0320 17:10:06.534367 4730 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.599502 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zs57x_must-gather-nxsd6_4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d/copy/0.log" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.599899 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.769141 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output\") pod \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.769207 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrzpq\" (UniqueName: \"kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq\") pod \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\" (UID: \"4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d\") " Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.776654 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq" (OuterVolumeSpecName: "kube-api-access-rrzpq") pod "4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" (UID: "4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d"). InnerVolumeSpecName "kube-api-access-rrzpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.871859 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrzpq\" (UniqueName: \"kubernetes.io/projected/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-kube-api-access-rrzpq\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.946900 4730 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zs57x_must-gather-nxsd6_4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d/copy/0.log" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.947270 4730 generic.go:334] "Generic (PLEG): container finished" podID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerID="5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d" exitCode=143 Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.947400 4730 scope.go:117] "RemoveContainer" containerID="5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.947354 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zs57x/must-gather-nxsd6" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.959877 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" (UID: "4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.968439 4730 scope.go:117] "RemoveContainer" containerID="5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e" Mar 20 17:10:06 crc kubenswrapper[4730]: I0320 17:10:06.973317 4730 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.049291 4730 scope.go:117] "RemoveContainer" containerID="5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d" Mar 20 17:10:07 crc kubenswrapper[4730]: E0320 17:10:07.049820 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d\": container with ID starting with 5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d not found: ID does not exist" containerID="5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.049866 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d"} err="failed to get container status \"5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d\": rpc error: code = NotFound desc = could not find container \"5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d\": container with ID starting with 5f9bd62b83471aba96f37df030e3e1c5b574f8d8437b38bdf436e67eecd3f71d not found: ID does not exist" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.049898 4730 scope.go:117] "RemoveContainer" containerID="5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e" Mar 20 17:10:07 crc kubenswrapper[4730]: E0320 17:10:07.050351 4730 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e\": container with ID starting with 5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e not found: ID does not exist" containerID="5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.050387 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e"} err="failed to get container status \"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e\": rpc error: code = NotFound desc = could not find container \"5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e\": container with ID starting with 5d6c90f5aec42032ef5d1044f85f5311f53e8138aa56ffcadeca5419bfb41e2e not found: ID does not exist" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.547697 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" path="/var/lib/kubelet/pods/4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d/volumes" Mar 20 17:10:07 crc kubenswrapper[4730]: I0320 17:10:07.548783 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4bb26e-8780-490e-b7d4-5068d41079d5" path="/var/lib/kubelet/pods/cb4bb26e-8780-490e-b7d4-5068d41079d5/volumes" Mar 20 17:10:16 crc kubenswrapper[4730]: I0320 17:10:16.383585 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:16 crc kubenswrapper[4730]: I0320 17:10:16.449982 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.068776 4730 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-dg9kw" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="registry-server" containerID="cri-o://9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9" gracePeriod=2 Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.607960 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.629115 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content\") pod \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.629272 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r58b8\" (UniqueName: \"kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8\") pod \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.629306 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities\") pod \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\" (UID: \"241b7a69-ba82-4fbd-afd9-edc9cab27f9d\") " Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.630659 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities" (OuterVolumeSpecName: "utilities") pod "241b7a69-ba82-4fbd-afd9-edc9cab27f9d" (UID: "241b7a69-ba82-4fbd-afd9-edc9cab27f9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.637177 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8" (OuterVolumeSpecName: "kube-api-access-r58b8") pod "241b7a69-ba82-4fbd-afd9-edc9cab27f9d" (UID: "241b7a69-ba82-4fbd-afd9-edc9cab27f9d"). InnerVolumeSpecName "kube-api-access-r58b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.695781 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "241b7a69-ba82-4fbd-afd9-edc9cab27f9d" (UID: "241b7a69-ba82-4fbd-afd9-edc9cab27f9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.731954 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.732060 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r58b8\" (UniqueName: \"kubernetes.io/projected/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-kube-api-access-r58b8\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:17 crc kubenswrapper[4730]: I0320 17:10:17.732120 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241b7a69-ba82-4fbd-afd9-edc9cab27f9d-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.081603 4730 generic.go:334] "Generic (PLEG): container finished" podID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" 
containerID="9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9" exitCode=0 Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.081672 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerDied","Data":"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9"} Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.082012 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dg9kw" event={"ID":"241b7a69-ba82-4fbd-afd9-edc9cab27f9d","Type":"ContainerDied","Data":"f937314b5ce57ec36a9a4339d1db04b7a002d434849520857f0973e48a5279bd"} Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.081713 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dg9kw" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.082082 4730 scope.go:117] "RemoveContainer" containerID="9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.120530 4730 scope.go:117] "RemoveContainer" containerID="ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.133205 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.144356 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dg9kw"] Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.147812 4730 scope.go:117] "RemoveContainer" containerID="0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.216096 4730 scope.go:117] "RemoveContainer" containerID="9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9" Mar 20 
17:10:18 crc kubenswrapper[4730]: E0320 17:10:18.216756 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9\": container with ID starting with 9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9 not found: ID does not exist" containerID="9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.216805 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9"} err="failed to get container status \"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9\": rpc error: code = NotFound desc = could not find container \"9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9\": container with ID starting with 9d7e5bf4b6a562f1ed448cda6789c8a058112f6f09b47cec2a321e00fe2d25e9 not found: ID does not exist" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.216837 4730 scope.go:117] "RemoveContainer" containerID="ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9" Mar 20 17:10:18 crc kubenswrapper[4730]: E0320 17:10:18.217208 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9\": container with ID starting with ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9 not found: ID does not exist" containerID="ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.217239 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9"} err="failed to get container status 
\"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9\": rpc error: code = NotFound desc = could not find container \"ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9\": container with ID starting with ad765e8c581dc65afc67f5111a021207a44884a645e6cb7483821431a43439f9 not found: ID does not exist" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.217272 4730 scope.go:117] "RemoveContainer" containerID="0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604" Mar 20 17:10:18 crc kubenswrapper[4730]: E0320 17:10:18.217551 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604\": container with ID starting with 0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604 not found: ID does not exist" containerID="0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604" Mar 20 17:10:18 crc kubenswrapper[4730]: I0320 17:10:18.217590 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604"} err="failed to get container status \"0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604\": rpc error: code = NotFound desc = could not find container \"0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604\": container with ID starting with 0dc78a3fe7ef0500ac5343944f1eb8adf85cffd267d7b92eb1e3793b7c2af604 not found: ID does not exist" Mar 20 17:10:19 crc kubenswrapper[4730]: I0320 17:10:19.554853 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" path="/var/lib/kubelet/pods/241b7a69-ba82-4fbd-afd9-edc9cab27f9d/volumes" Mar 20 17:10:21 crc kubenswrapper[4730]: I0320 17:10:21.548105 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 
17:10:21 crc kubenswrapper[4730]: E0320 17:10:21.548634 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:10:34 crc kubenswrapper[4730]: I0320 17:10:34.534497 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:10:34 crc kubenswrapper[4730]: E0320 17:10:34.535713 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:10:45 crc kubenswrapper[4730]: I0320 17:10:45.533660 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:10:45 crc kubenswrapper[4730]: E0320 17:10:45.534407 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:10:55 crc kubenswrapper[4730]: I0320 17:10:55.108882 4730 scope.go:117] "RemoveContainer" 
containerID="570fedc27114561def00c9fd45b45e729130f1ccdb2c50a188548ae1020d9f83" Mar 20 17:10:55 crc kubenswrapper[4730]: I0320 17:10:55.166875 4730 scope.go:117] "RemoveContainer" containerID="f2a5b11498b3565ac584ae0844e2ddcd32e123d95c3d6fe4c6c0d6653ce556a9" Mar 20 17:11:00 crc kubenswrapper[4730]: I0320 17:11:00.533370 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:11:00 crc kubenswrapper[4730]: E0320 17:11:00.534683 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:11:15 crc kubenswrapper[4730]: I0320 17:11:15.534395 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:11:15 crc kubenswrapper[4730]: E0320 17:11:15.535340 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:11:26 crc kubenswrapper[4730]: I0320 17:11:26.533876 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:11:26 crc kubenswrapper[4730]: E0320 17:11:26.534871 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:11:40 crc kubenswrapper[4730]: I0320 17:11:40.534495 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:11:40 crc kubenswrapper[4730]: E0320 17:11:40.538501 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:11:54 crc kubenswrapper[4730]: I0320 17:11:54.532864 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:11:54 crc kubenswrapper[4730]: E0320 17:11:54.533699 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.158492 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567112-ft78g"] Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.161952 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="gather" Mar 20 
17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.162087 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="gather" Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.162204 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="copy" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.162342 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="copy" Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.162463 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="extract-content" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.162566 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="extract-content" Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.162698 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="extract-utilities" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.162794 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="extract-utilities" Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.162919 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe10df03-e254-4075-9487-78370bdbdf87" containerName="oc" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.163014 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe10df03-e254-4075-9487-78370bdbdf87" containerName="oc" Mar 20 17:12:00 crc kubenswrapper[4730]: E0320 17:12:00.163123 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="registry-server" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.163220 4730 
state_mem.go:107] "Deleted CPUSet assignment" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="registry-server" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.163697 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="copy" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.163836 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ddcc0-c7cc-4670-9679-bd5ccf32f30d" containerName="gather" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.163964 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="241b7a69-ba82-4fbd-afd9-edc9cab27f9d" containerName="registry-server" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.164076 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe10df03-e254-4075-9487-78370bdbdf87" containerName="oc" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.165530 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.169414 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.169501 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.170202 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.173564 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567112-ft78g"] Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.264327 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvpv6\" (UniqueName: \"kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6\") pod \"auto-csr-approver-29567112-ft78g\" (UID: \"2b24c66b-fc08-4f63-8ac8-15d11de2b672\") " pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.366907 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvpv6\" (UniqueName: \"kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6\") pod \"auto-csr-approver-29567112-ft78g\" (UID: \"2b24c66b-fc08-4f63-8ac8-15d11de2b672\") " pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.392551 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvpv6\" (UniqueName: \"kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6\") pod \"auto-csr-approver-29567112-ft78g\" (UID: \"2b24c66b-fc08-4f63-8ac8-15d11de2b672\") " 
pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.494469 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:00 crc kubenswrapper[4730]: I0320 17:12:00.967970 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567112-ft78g"] Mar 20 17:12:01 crc kubenswrapper[4730]: I0320 17:12:01.339582 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567112-ft78g" event={"ID":"2b24c66b-fc08-4f63-8ac8-15d11de2b672","Type":"ContainerStarted","Data":"80ccaaf3b45a0cc48425f3e70c0a6f860a18b1e69a52e75f4d16543faafe4710"} Mar 20 17:12:03 crc kubenswrapper[4730]: I0320 17:12:03.364414 4730 generic.go:334] "Generic (PLEG): container finished" podID="2b24c66b-fc08-4f63-8ac8-15d11de2b672" containerID="443bf939ac2c50184710f05da0153e5a7905637d2ddf41d7e108e98cb3d443fe" exitCode=0 Mar 20 17:12:03 crc kubenswrapper[4730]: I0320 17:12:03.364471 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567112-ft78g" event={"ID":"2b24c66b-fc08-4f63-8ac8-15d11de2b672","Type":"ContainerDied","Data":"443bf939ac2c50184710f05da0153e5a7905637d2ddf41d7e108e98cb3d443fe"} Mar 20 17:12:04 crc kubenswrapper[4730]: I0320 17:12:04.679171 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:04 crc kubenswrapper[4730]: I0320 17:12:04.764108 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvpv6\" (UniqueName: \"kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6\") pod \"2b24c66b-fc08-4f63-8ac8-15d11de2b672\" (UID: \"2b24c66b-fc08-4f63-8ac8-15d11de2b672\") " Mar 20 17:12:04 crc kubenswrapper[4730]: I0320 17:12:04.772325 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6" (OuterVolumeSpecName: "kube-api-access-bvpv6") pod "2b24c66b-fc08-4f63-8ac8-15d11de2b672" (UID: "2b24c66b-fc08-4f63-8ac8-15d11de2b672"). InnerVolumeSpecName "kube-api-access-bvpv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:12:04 crc kubenswrapper[4730]: I0320 17:12:04.866577 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvpv6\" (UniqueName: \"kubernetes.io/projected/2b24c66b-fc08-4f63-8ac8-15d11de2b672-kube-api-access-bvpv6\") on node \"crc\" DevicePath \"\"" Mar 20 17:12:05 crc kubenswrapper[4730]: I0320 17:12:05.383626 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567112-ft78g" event={"ID":"2b24c66b-fc08-4f63-8ac8-15d11de2b672","Type":"ContainerDied","Data":"80ccaaf3b45a0cc48425f3e70c0a6f860a18b1e69a52e75f4d16543faafe4710"} Mar 20 17:12:05 crc kubenswrapper[4730]: I0320 17:12:05.383666 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ccaaf3b45a0cc48425f3e70c0a6f860a18b1e69a52e75f4d16543faafe4710" Mar 20 17:12:05 crc kubenswrapper[4730]: I0320 17:12:05.383699 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567112-ft78g" Mar 20 17:12:05 crc kubenswrapper[4730]: I0320 17:12:05.751516 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567106-pg9p8"] Mar 20 17:12:05 crc kubenswrapper[4730]: I0320 17:12:05.763070 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567106-pg9p8"] Mar 20 17:12:06 crc kubenswrapper[4730]: I0320 17:12:06.532863 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:12:06 crc kubenswrapper[4730]: E0320 17:12:06.533283 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:12:07 crc kubenswrapper[4730]: I0320 17:12:07.543908 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9695c6e8-6be3-4465-95a1-887c6a568fb7" path="/var/lib/kubelet/pods/9695c6e8-6be3-4465-95a1-887c6a568fb7/volumes" Mar 20 17:12:21 crc kubenswrapper[4730]: I0320 17:12:21.551754 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:12:21 crc kubenswrapper[4730]: E0320 17:12:21.553196 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" 
podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:12:32 crc kubenswrapper[4730]: I0320 17:12:32.533873 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:12:32 crc kubenswrapper[4730]: E0320 17:12:32.534915 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:12:47 crc kubenswrapper[4730]: I0320 17:12:47.533704 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:12:47 crc kubenswrapper[4730]: E0320 17:12:47.534621 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:12:55 crc kubenswrapper[4730]: I0320 17:12:55.360522 4730 scope.go:117] "RemoveContainer" containerID="e24f74530672b4126d2aef0eaec17c584c50b3452f9280f1c1dd7481992b500e" Mar 20 17:12:59 crc kubenswrapper[4730]: I0320 17:12:59.533514 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:12:59 crc kubenswrapper[4730]: E0320 17:12:59.534352 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:13:13 crc kubenswrapper[4730]: I0320 17:13:13.535043 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:13:13 crc kubenswrapper[4730]: E0320 17:13:13.536496 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:13:24 crc kubenswrapper[4730]: I0320 17:13:24.533680 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:13:24 crc kubenswrapper[4730]: E0320 17:13:24.534782 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:13:36 crc kubenswrapper[4730]: I0320 17:13:36.533767 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:13:36 crc kubenswrapper[4730]: E0320 17:13:36.534752 4730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-p5qvf_openshift-machine-config-operator(7fcd3db3-55f1-4c23-8fa9-78844495cea3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" Mar 20 17:13:51 crc kubenswrapper[4730]: I0320 17:13:51.540112 4730 scope.go:117] "RemoveContainer" containerID="581aa02672219ffcaf2185db619f24c1c4e9f4dc71ab9a090d63ff524fc5bbca" Mar 20 17:13:51 crc kubenswrapper[4730]: I0320 17:13:51.844960 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" event={"ID":"7fcd3db3-55f1-4c23-8fa9-78844495cea3","Type":"ContainerStarted","Data":"5e057797e6cbf54310c6ad4bc172547f84470f218b7fb0b50a4e6f0a9d3a806d"} Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.151867 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567114-j8jvd"] Mar 20 17:14:00 crc kubenswrapper[4730]: E0320 17:14:00.152791 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b24c66b-fc08-4f63-8ac8-15d11de2b672" containerName="oc" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.152802 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b24c66b-fc08-4f63-8ac8-15d11de2b672" containerName="oc" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.153004 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b24c66b-fc08-4f63-8ac8-15d11de2b672" containerName="oc" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.153679 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.156789 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.157336 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.157511 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.168034 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567114-j8jvd"] Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.302366 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7d7\" (UniqueName: \"kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7\") pod \"auto-csr-approver-29567114-j8jvd\" (UID: \"e7911488-99e9-4ca9-a033-cc7507544155\") " pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.404810 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7d7\" (UniqueName: \"kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7\") pod \"auto-csr-approver-29567114-j8jvd\" (UID: \"e7911488-99e9-4ca9-a033-cc7507544155\") " pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.427946 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7d7\" (UniqueName: \"kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7\") pod \"auto-csr-approver-29567114-j8jvd\" (UID: \"e7911488-99e9-4ca9-a033-cc7507544155\") " 
pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:00 crc kubenswrapper[4730]: I0320 17:14:00.472907 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:01 crc kubenswrapper[4730]: I0320 17:14:01.017486 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567114-j8jvd"] Mar 20 17:14:01 crc kubenswrapper[4730]: I0320 17:14:01.944407 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" event={"ID":"e7911488-99e9-4ca9-a033-cc7507544155","Type":"ContainerStarted","Data":"621449cee3581063df8522013ff4f30f2763ddb83fcdb9ac2fa11df0ff7d2a6f"} Mar 20 17:14:02 crc kubenswrapper[4730]: I0320 17:14:02.955796 4730 generic.go:334] "Generic (PLEG): container finished" podID="e7911488-99e9-4ca9-a033-cc7507544155" containerID="b12b0d9a794aa0785dabcb98dfaf9c1c155b995ea23715f6c0f0aaa1f9e1a841" exitCode=0 Mar 20 17:14:02 crc kubenswrapper[4730]: I0320 17:14:02.955901 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" event={"ID":"e7911488-99e9-4ca9-a033-cc7507544155","Type":"ContainerDied","Data":"b12b0d9a794aa0785dabcb98dfaf9c1c155b995ea23715f6c0f0aaa1f9e1a841"} Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.341515 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.524439 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc7d7\" (UniqueName: \"kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7\") pod \"e7911488-99e9-4ca9-a033-cc7507544155\" (UID: \"e7911488-99e9-4ca9-a033-cc7507544155\") " Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.539790 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7" (OuterVolumeSpecName: "kube-api-access-xc7d7") pod "e7911488-99e9-4ca9-a033-cc7507544155" (UID: "e7911488-99e9-4ca9-a033-cc7507544155"). InnerVolumeSpecName "kube-api-access-xc7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.627530 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc7d7\" (UniqueName: \"kubernetes.io/projected/e7911488-99e9-4ca9-a033-cc7507544155-kube-api-access-xc7d7\") on node \"crc\" DevicePath \"\"" Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.975410 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" event={"ID":"e7911488-99e9-4ca9-a033-cc7507544155","Type":"ContainerDied","Data":"621449cee3581063df8522013ff4f30f2763ddb83fcdb9ac2fa11df0ff7d2a6f"} Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.975456 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="621449cee3581063df8522013ff4f30f2763ddb83fcdb9ac2fa11df0ff7d2a6f" Mar 20 17:14:04 crc kubenswrapper[4730]: I0320 17:14:04.975472 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567114-j8jvd" Mar 20 17:14:05 crc kubenswrapper[4730]: I0320 17:14:05.420330 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567108-zftgj"] Mar 20 17:14:05 crc kubenswrapper[4730]: I0320 17:14:05.430786 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567108-zftgj"] Mar 20 17:14:05 crc kubenswrapper[4730]: I0320 17:14:05.552374 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee25ae29-2b59-43fa-bee7-ff759f2b962d" path="/var/lib/kubelet/pods/ee25ae29-2b59-43fa-bee7-ff759f2b962d/volumes" Mar 20 17:14:55 crc kubenswrapper[4730]: I0320 17:14:55.787962 4730 scope.go:117] "RemoveContainer" containerID="b89905d33f72768b38a357a1f6d9426d8d5d00caccec7854337ced1d8a3cac16" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.171129 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7"] Mar 20 17:15:00 crc kubenswrapper[4730]: E0320 17:15:00.172400 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7911488-99e9-4ca9-a033-cc7507544155" containerName="oc" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.172421 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7911488-99e9-4ca9-a033-cc7507544155" containerName="oc" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.172777 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7911488-99e9-4ca9-a033-cc7507544155" containerName="oc" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.174009 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.177136 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.186663 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7"] Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.199399 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.337358 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.337722 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s555p\" (UniqueName: \"kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.337822 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.439958 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s555p\" (UniqueName: \"kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.440391 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.440573 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.441817 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.451653 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.479672 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s555p\" (UniqueName: \"kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p\") pod \"collect-profiles-29567115-h44j7\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:00 crc kubenswrapper[4730]: I0320 17:15:00.506793 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:01 crc kubenswrapper[4730]: I0320 17:15:01.020507 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7"] Mar 20 17:15:01 crc kubenswrapper[4730]: I0320 17:15:01.621860 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" event={"ID":"c39e4d8b-b591-43ff-b748-b42cde14f3b4","Type":"ContainerStarted","Data":"d876fdd4508e5916e191354e6969b5440f41eff8f94aa2d13d1bc608ceaf31ba"} Mar 20 17:15:01 crc kubenswrapper[4730]: I0320 17:15:01.622471 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" event={"ID":"c39e4d8b-b591-43ff-b748-b42cde14f3b4","Type":"ContainerStarted","Data":"2c4a8eb757e88191b0c1c62006784189fc622053dff0c7d3f1a8b5fd8f3c2e29"} Mar 20 17:15:01 crc kubenswrapper[4730]: I0320 17:15:01.640622 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" 
podStartSLOduration=1.640605579 podStartE2EDuration="1.640605579s" podCreationTimestamp="2026-03-20 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 17:15:01.637791479 +0000 UTC m=+5760.851162888" watchObservedRunningTime="2026-03-20 17:15:01.640605579 +0000 UTC m=+5760.853976948" Mar 20 17:15:02 crc kubenswrapper[4730]: I0320 17:15:02.639212 4730 generic.go:334] "Generic (PLEG): container finished" podID="c39e4d8b-b591-43ff-b748-b42cde14f3b4" containerID="d876fdd4508e5916e191354e6969b5440f41eff8f94aa2d13d1bc608ceaf31ba" exitCode=0 Mar 20 17:15:02 crc kubenswrapper[4730]: I0320 17:15:02.639283 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" event={"ID":"c39e4d8b-b591-43ff-b748-b42cde14f3b4","Type":"ContainerDied","Data":"d876fdd4508e5916e191354e6969b5440f41eff8f94aa2d13d1bc608ceaf31ba"} Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.072934 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.134984 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s555p\" (UniqueName: \"kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p\") pod \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.135045 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume\") pod \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.135234 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume\") pod \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\" (UID: \"c39e4d8b-b591-43ff-b748-b42cde14f3b4\") " Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.136595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "c39e4d8b-b591-43ff-b748-b42cde14f3b4" (UID: "c39e4d8b-b591-43ff-b748-b42cde14f3b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.142135 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p" (OuterVolumeSpecName: "kube-api-access-s555p") pod "c39e4d8b-b591-43ff-b748-b42cde14f3b4" (UID: "c39e4d8b-b591-43ff-b748-b42cde14f3b4"). 
InnerVolumeSpecName "kube-api-access-s555p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.150863 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c39e4d8b-b591-43ff-b748-b42cde14f3b4" (UID: "c39e4d8b-b591-43ff-b748-b42cde14f3b4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.236616 4730 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e4d8b-b591-43ff-b748-b42cde14f3b4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.236648 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s555p\" (UniqueName: \"kubernetes.io/projected/c39e4d8b-b591-43ff-b748-b42cde14f3b4-kube-api-access-s555p\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.236660 4730 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c39e4d8b-b591-43ff-b748-b42cde14f3b4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.686239 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp"] Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.696646 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" event={"ID":"c39e4d8b-b591-43ff-b748-b42cde14f3b4","Type":"ContainerDied","Data":"2c4a8eb757e88191b0c1c62006784189fc622053dff0c7d3f1a8b5fd8f3c2e29"} Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.696690 4730 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="2c4a8eb757e88191b0c1c62006784189fc622053dff0c7d3f1a8b5fd8f3c2e29" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.696777 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567115-h44j7" Mar 20 17:15:04 crc kubenswrapper[4730]: I0320 17:15:04.701265 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567070-5g4gp"] Mar 20 17:15:05 crc kubenswrapper[4730]: I0320 17:15:05.550278 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86d92cc-d42e-496f-b31c-d6c56fb441c7" path="/var/lib/kubelet/pods/c86d92cc-d42e-496f-b31c-d6c56fb441c7/volumes" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.153805 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:35 crc kubenswrapper[4730]: E0320 17:15:35.155725 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39e4d8b-b591-43ff-b748-b42cde14f3b4" containerName="collect-profiles" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.155751 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39e4d8b-b591-43ff-b748-b42cde14f3b4" containerName="collect-profiles" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.156357 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39e4d8b-b591-43ff-b748-b42cde14f3b4" containerName="collect-profiles" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.159148 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.167811 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.255060 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.255198 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62hrp\" (UniqueName: \"kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.255322 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.357709 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.357805 4730 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-62hrp\" (UniqueName: \"kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.357884 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.358496 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.358791 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.387895 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62hrp\" (UniqueName: \"kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp\") pod \"community-operators-bxzjx\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:35 crc kubenswrapper[4730]: I0320 17:15:35.491891 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:36 crc kubenswrapper[4730]: I0320 17:15:36.051688 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:36 crc kubenswrapper[4730]: I0320 17:15:36.060808 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerStarted","Data":"b13f5df7dabac5d1b59a2b83dfc3ee8119163948122309b07d794df7c6658400"} Mar 20 17:15:37 crc kubenswrapper[4730]: I0320 17:15:37.077314 4730 generic.go:334] "Generic (PLEG): container finished" podID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerID="04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41" exitCode=0 Mar 20 17:15:37 crc kubenswrapper[4730]: I0320 17:15:37.077406 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerDied","Data":"04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41"} Mar 20 17:15:37 crc kubenswrapper[4730]: I0320 17:15:37.080227 4730 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 17:15:39 crc kubenswrapper[4730]: I0320 17:15:39.104770 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerStarted","Data":"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17"} Mar 20 17:15:40 crc kubenswrapper[4730]: I0320 17:15:40.116796 4730 generic.go:334] "Generic (PLEG): container finished" podID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerID="20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17" exitCode=0 Mar 20 17:15:40 crc kubenswrapper[4730]: I0320 17:15:40.116843 4730 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerDied","Data":"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17"} Mar 20 17:15:41 crc kubenswrapper[4730]: I0320 17:15:41.127999 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerStarted","Data":"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1"} Mar 20 17:15:41 crc kubenswrapper[4730]: I0320 17:15:41.165855 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bxzjx" podStartSLOduration=2.721705126 podStartE2EDuration="6.165836333s" podCreationTimestamp="2026-03-20 17:15:35 +0000 UTC" firstStartedPulling="2026-03-20 17:15:37.079851774 +0000 UTC m=+5796.293223173" lastFinishedPulling="2026-03-20 17:15:40.523982971 +0000 UTC m=+5799.737354380" observedRunningTime="2026-03-20 17:15:41.152573295 +0000 UTC m=+5800.365944674" watchObservedRunningTime="2026-03-20 17:15:41.165836333 +0000 UTC m=+5800.379207712" Mar 20 17:15:45 crc kubenswrapper[4730]: I0320 17:15:45.492335 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:45 crc kubenswrapper[4730]: I0320 17:15:45.492974 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:45 crc kubenswrapper[4730]: I0320 17:15:45.573483 4730 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:46 crc kubenswrapper[4730]: I0320 17:15:46.258071 4730 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:46 crc kubenswrapper[4730]: I0320 
17:15:46.322368 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:48 crc kubenswrapper[4730]: I0320 17:15:48.208088 4730 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bxzjx" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="registry-server" containerID="cri-o://0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1" gracePeriod=2 Mar 20 17:15:48 crc kubenswrapper[4730]: I0320 17:15:48.974877 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.134477 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities\") pod \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.134597 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content\") pod \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.134802 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62hrp\" (UniqueName: \"kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp\") pod \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\" (UID: \"791ba4a6-f2b3-4689-9e71-0de25f4604d0\") " Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.135407 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities" (OuterVolumeSpecName: 
"utilities") pod "791ba4a6-f2b3-4689-9e71-0de25f4604d0" (UID: "791ba4a6-f2b3-4689-9e71-0de25f4604d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.135613 4730 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.150595 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp" (OuterVolumeSpecName: "kube-api-access-62hrp") pod "791ba4a6-f2b3-4689-9e71-0de25f4604d0" (UID: "791ba4a6-f2b3-4689-9e71-0de25f4604d0"). InnerVolumeSpecName "kube-api-access-62hrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.189834 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "791ba4a6-f2b3-4689-9e71-0de25f4604d0" (UID: "791ba4a6-f2b3-4689-9e71-0de25f4604d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.229938 4730 generic.go:334] "Generic (PLEG): container finished" podID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerID="0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1" exitCode=0 Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.230005 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerDied","Data":"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1"} Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.230043 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxzjx" event={"ID":"791ba4a6-f2b3-4689-9e71-0de25f4604d0","Type":"ContainerDied","Data":"b13f5df7dabac5d1b59a2b83dfc3ee8119163948122309b07d794df7c6658400"} Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.230070 4730 scope.go:117] "RemoveContainer" containerID="0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.230418 4730 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxzjx" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.237105 4730 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/791ba4a6-f2b3-4689-9e71-0de25f4604d0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.237134 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62hrp\" (UniqueName: \"kubernetes.io/projected/791ba4a6-f2b3-4689-9e71-0de25f4604d0-kube-api-access-62hrp\") on node \"crc\" DevicePath \"\"" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.259522 4730 scope.go:117] "RemoveContainer" containerID="20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.278120 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.287075 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bxzjx"] Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.308200 4730 scope.go:117] "RemoveContainer" containerID="04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.342216 4730 scope.go:117] "RemoveContainer" containerID="0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1" Mar 20 17:15:49 crc kubenswrapper[4730]: E0320 17:15:49.347738 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1\": container with ID starting with 0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1 not found: ID does not exist" containerID="0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1" Mar 20 17:15:49 crc 
kubenswrapper[4730]: I0320 17:15:49.347781 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1"} err="failed to get container status \"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1\": rpc error: code = NotFound desc = could not find container \"0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1\": container with ID starting with 0b90ef7a58192d8a356acbacd08222c1dfbd6fb9675fd51426cc6e2ebaeb31f1 not found: ID does not exist" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.347808 4730 scope.go:117] "RemoveContainer" containerID="20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17" Mar 20 17:15:49 crc kubenswrapper[4730]: E0320 17:15:49.349851 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17\": container with ID starting with 20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17 not found: ID does not exist" containerID="20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.349900 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17"} err="failed to get container status \"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17\": rpc error: code = NotFound desc = could not find container \"20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17\": container with ID starting with 20aaa39386bf4bfad8806bfbe95e228235dfe7287dd2492b93a9e3dfc6dd2c17 not found: ID does not exist" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.349927 4730 scope.go:117] "RemoveContainer" containerID="04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41" Mar 20 
17:15:49 crc kubenswrapper[4730]: E0320 17:15:49.350376 4730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41\": container with ID starting with 04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41 not found: ID does not exist" containerID="04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.350429 4730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41"} err="failed to get container status \"04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41\": rpc error: code = NotFound desc = could not find container \"04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41\": container with ID starting with 04e2ee9ec620bf90cddc552326a5dd3c11ecccc6463da10f015d8dca03fddc41 not found: ID does not exist" Mar 20 17:15:49 crc kubenswrapper[4730]: I0320 17:15:49.557290 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" path="/var/lib/kubelet/pods/791ba4a6-f2b3-4689-9e71-0de25f4604d0/volumes" Mar 20 17:15:55 crc kubenswrapper[4730]: I0320 17:15:55.875210 4730 scope.go:117] "RemoveContainer" containerID="dc2c07b3766f06e0423270d40c09a7a028e4cbca82d59a060deedb7b5661816a" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.176446 4730 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567116-j4glj"] Mar 20 17:16:00 crc kubenswrapper[4730]: E0320 17:16:00.178624 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="registry-server" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.178653 4730 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="registry-server" Mar 20 17:16:00 crc kubenswrapper[4730]: E0320 17:16:00.178704 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="extract-utilities" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.178713 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="extract-utilities" Mar 20 17:16:00 crc kubenswrapper[4730]: E0320 17:16:00.178728 4730 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="extract-content" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.178744 4730 state_mem.go:107] "Deleted CPUSet assignment" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="extract-content" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.178994 4730 memory_manager.go:354] "RemoveStaleState removing state" podUID="791ba4a6-f2b3-4689-9e71-0de25f4604d0" containerName="registry-server" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.179762 4730 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.182303 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.182644 4730 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.182896 4730 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-vlqbl" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.195724 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567116-j4glj"] Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.281833 4730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4cr\" (UniqueName: \"kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr\") pod \"auto-csr-approver-29567116-j4glj\" (UID: \"c93b527b-9082-468d-8a7d-9c40023e92f4\") " pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.384581 4730 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4cr\" (UniqueName: \"kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr\") pod \"auto-csr-approver-29567116-j4glj\" (UID: \"c93b527b-9082-468d-8a7d-9c40023e92f4\") " pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.414601 4730 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4cr\" (UniqueName: \"kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr\") pod \"auto-csr-approver-29567116-j4glj\" (UID: \"c93b527b-9082-468d-8a7d-9c40023e92f4\") " 
pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.507447 4730 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:00 crc kubenswrapper[4730]: I0320 17:16:00.831236 4730 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567116-j4glj"] Mar 20 17:16:00 crc kubenswrapper[4730]: W0320 17:16:00.841538 4730 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc93b527b_9082_468d_8a7d_9c40023e92f4.slice/crio-22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc WatchSource:0}: Error finding container 22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc: Status 404 returned error can't find the container with id 22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc Mar 20 17:16:01 crc kubenswrapper[4730]: I0320 17:16:01.373730 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567116-j4glj" event={"ID":"c93b527b-9082-468d-8a7d-9c40023e92f4","Type":"ContainerStarted","Data":"22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc"} Mar 20 17:16:02 crc kubenswrapper[4730]: I0320 17:16:02.385685 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567116-j4glj" event={"ID":"c93b527b-9082-468d-8a7d-9c40023e92f4","Type":"ContainerStarted","Data":"4d43c7d4bdb98f9e776981a409e6b423d12e3f46827a144b8d0da0bc996bfc87"} Mar 20 17:16:02 crc kubenswrapper[4730]: I0320 17:16:02.420774 4730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567116-j4glj" podStartSLOduration=1.4968243270000001 podStartE2EDuration="2.420752116s" podCreationTimestamp="2026-03-20 17:16:00 +0000 UTC" firstStartedPulling="2026-03-20 17:16:00.845447844 +0000 UTC 
m=+5820.058819213" lastFinishedPulling="2026-03-20 17:16:01.769375593 +0000 UTC m=+5820.982747002" observedRunningTime="2026-03-20 17:16:02.408158297 +0000 UTC m=+5821.621529696" watchObservedRunningTime="2026-03-20 17:16:02.420752116 +0000 UTC m=+5821.634123485" Mar 20 17:16:03 crc kubenswrapper[4730]: I0320 17:16:03.397766 4730 generic.go:334] "Generic (PLEG): container finished" podID="c93b527b-9082-468d-8a7d-9c40023e92f4" containerID="4d43c7d4bdb98f9e776981a409e6b423d12e3f46827a144b8d0da0bc996bfc87" exitCode=0 Mar 20 17:16:03 crc kubenswrapper[4730]: I0320 17:16:03.398146 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567116-j4glj" event={"ID":"c93b527b-9082-468d-8a7d-9c40023e92f4","Type":"ContainerDied","Data":"4d43c7d4bdb98f9e776981a409e6b423d12e3f46827a144b8d0da0bc996bfc87"} Mar 20 17:16:04 crc kubenswrapper[4730]: I0320 17:16:04.891076 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:04 crc kubenswrapper[4730]: I0320 17:16:04.997301 4730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt4cr\" (UniqueName: \"kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr\") pod \"c93b527b-9082-468d-8a7d-9c40023e92f4\" (UID: \"c93b527b-9082-468d-8a7d-9c40023e92f4\") " Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.022785 4730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr" (OuterVolumeSpecName: "kube-api-access-bt4cr") pod "c93b527b-9082-468d-8a7d-9c40023e92f4" (UID: "c93b527b-9082-468d-8a7d-9c40023e92f4"). InnerVolumeSpecName "kube-api-access-bt4cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.101708 4730 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt4cr\" (UniqueName: \"kubernetes.io/projected/c93b527b-9082-468d-8a7d-9c40023e92f4-kube-api-access-bt4cr\") on node \"crc\" DevicePath \"\"" Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.426042 4730 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567116-j4glj" event={"ID":"c93b527b-9082-468d-8a7d-9c40023e92f4","Type":"ContainerDied","Data":"22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc"} Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.426094 4730 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22953afd2d795cdc6d80eed1f4ba1a4fdf6c8530363cff9b77f1fe7cfdc698fc" Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.426116 4730 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567116-j4glj" Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.977687 4730 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567110-n8lgt"] Mar 20 17:16:05 crc kubenswrapper[4730]: I0320 17:16:05.994559 4730 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567110-n8lgt"] Mar 20 17:16:07 crc kubenswrapper[4730]: I0320 17:16:07.553074 4730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe10df03-e254-4075-9487-78370bdbdf87" path="/var/lib/kubelet/pods/fe10df03-e254-4075-9487-78370bdbdf87/volumes" Mar 20 17:16:12 crc kubenswrapper[4730]: I0320 17:16:12.880788 4730 patch_prober.go:28] interesting pod/machine-config-daemon-p5qvf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 17:16:12 crc kubenswrapper[4730]: I0320 17:16:12.881614 4730 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5qvf" podUID="7fcd3db3-55f1-4c23-8fa9-78844495cea3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"